Top 10 2013-Note About Risks
{{Top_10_2013:TopTemplate
    |usenext=2013NextLink
    |next={{Top_10:LanguageFile|text=detailsAboutRiskFactors|language=en}}
    |useprev=2013PrevLink
    |prev={{Top_10:LanguageFile|text=whatsNextforOrganizations|language=en}}
    |year=2013
    |language=en
}}

{{Top_10:SubsectionTableBeginTemplate|type=main}} {{Top_10_2010:SubsectionAdvancedTemplate|type={{Top_10_2010:StyleTemplate}}|subsection=freetext|position=firstWhole|title={{Top_10:LanguageFile|text=itsAboutRisksNotWeaknesses|language=en}}|width=100%|year=2013}}
Although the [https://www.owasp.org/index.php/Top_10_2007 2007] and earlier versions of the [https://www.owasp.org/index.php/Top10 OWASP Top 10] focused on identifying the most common “vulnerabilities,” the OWASP Top 10 has always been organized around risks. This has caused some understandable confusion among people searching for an airtight weakness taxonomy. The [https://www.owasp.org/index.php/Top_10_2010 OWASP Top 10 for 2010] clarified the risk focus of the Top 10 by being explicit about how threat agents, attack vectors, weaknesses, technical impacts, and business impacts combine to produce risks. This version of the OWASP Top 10 follows the same methodology.
  
The Risk Rating methodology for the Top 10 is based on the [https://www.owasp.org/index.php/OWASP_Risk_Rating_Methodology  OWASP Risk Rating Methodology]. For each Top 10 item, we estimated the typical risk that each weakness introduces to a typical web application by looking at common likelihood factors and impact factors for each common weakness. We then rank ordered the Top 10 according to those weaknesses that typically introduce the most significant risk to an application.
  
The [https://www.owasp.org/index.php/OWASP_Risk_Rating_Methodology  OWASP Risk Rating Methodology] defines numerous factors to help calculate the risk of an identified vulnerability. However, the Top 10 must talk about generalities, rather than specific vulnerabilities in real applications. Consequently, we can never be as precise as system owners can be when calculating risks for their application(s). You are best equipped to judge the importance of your applications and data, what your threat agents are, and how your system has been built and is being operated.
  
Our methodology includes three likelihood factors for each weakness (prevalence, detectability, and ease of exploit) and one impact factor (technical impact). The prevalence of a weakness is a factor you typically don’t have to calculate: a number of different organizations (referenced in the Acknowledgements section on page 3) supplied us with prevalence statistics, and we averaged their data to produce a Top 10 likelihood-of-existence list ordered by prevalence. This data was then combined with the other two likelihood factors (detectability and ease of exploit) to calculate a likelihood rating for each weakness. That rating was then multiplied by our estimated average technical impact for each item to arrive at an overall risk ranking for each item in the Top 10.
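The prevalence-averaging step described above can be sketched in a few lines. This is only an illustration: the organization labels and percentage figures below are invented, and the actual supplier data is described in the Acknowledgements section.

```python
# Hypothetical sketch of the prevalence-averaging step; the organization
# data and percentages here are invented for illustration only.
from statistics import mean

# Per-weakness prevalence statistics (% of tested apps), one value per supplier
org_prevalence = {
    "Injection":             [21.0, 19.5, 24.0],
    "XSS":                   [48.0, 55.0, 51.5],
    "Broken Authentication": [14.0, 11.0, 12.5],
}

# Average each weakness's statistics, then rank most-prevalent first
averaged = {w: mean(stats) for w, stats in org_prevalence.items()}
ranked = sorted(averaged, key=averaged.get, reverse=True)
print(ranked[0])  # XSS has the highest average prevalence in this made-up data
```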
  
Note that this approach does not take the likelihood of the threat agent into account. Nor does it account for any of the technical details of your particular application. Any of these factors could significantly affect the overall likelihood of an attacker finding and exploiting a particular vulnerability. This rating also does not take into account the actual impact on your business. Your organization will have to decide how much application security risk it is willing to accept, given its culture, industry, and regulatory environment. The purpose of the OWASP Top 10 is not to do this risk analysis for you.
  
The following illustrates our calculation of the risk for A3: Cross-Site Scripting, as an example. XSS is so prevalent it warranted the only ‘VERY WIDESPREAD’ prevalence value of 0. All other risks ranged from widespread to uncommon (value 1 to 3).
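The calculation above can be sketched in a few lines of Python. This is only an illustrative sketch, not an OWASP tool: the function names are our own, and the factor values are the ones given for A3 (lower numbers on the 0–3 scales indicate a more likely or more severe rating).

```python
# Minimal sketch of the Top 10 risk calculation; function names are
# illustrative, not part of any OWASP library.

def likelihood_rating(exploitability: float, prevalence: float,
                      detectability: float) -> float:
    """Likelihood rating: average of the three likelihood factors."""
    return (exploitability + prevalence + detectability) / 3

def risk_ranking(exploitability: float, prevalence: float,
                 detectability: float, impact: float) -> float:
    """Overall risk: likelihood rating multiplied by technical impact."""
    return likelihood_rating(exploitability, prevalence, detectability) * impact

# A3 Cross-Site Scripting: Exploitability AVERAGE (2),
# Prevalence VERY WIDESPREAD (0), Detectability EASY (1), Impact MODERATE (2)
xss_likelihood = likelihood_rating(2, 0, 1)   # (2 + 0 + 1) / 3 = 1.0
xss_risk = risk_ranking(2, 0, 1, 2)           # 1.0 * 2 = 2.0
```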
  
{{Top_10_2010:SummaryTableHeaderBeginTemplate|year=2013|language=en}}
{{Top_10:SummaryTableTemplate|exploitability=2|prevalence=0|detectability=1|impact=2|year=2013|language=en}}
{{Top_10_2010:SummaryTableHeaderEndTemplate|year=2013|language=en}}
<td>&nbsp;</td>
<td style="text-align: center; padding: 4px; font-size: 200%; font-weight: bold; border: 3px solid #444444;">2</td>
<td style="text-align: center; padding: 4px; font-size: 200%; font-weight: bold; border: 3px solid #444444;">0</td>
<td style="text-align: center; padding: 4px; font-size: 200%; font-weight: bold; border: 3px solid #444444;">1</td>
<td style="text-align: center; padding: 4px; font-size: 200%; font-weight: bold; border: 3px solid #444444;">2</td>
<td>&nbsp;</td></tr>
<tr>
<td>&nbsp;</td>
<td colspan="3" style="border: #4d953d 1px solid; background-color: #D9D9D9; text-align: center; padding: 4px;">
<span style="font-weight: bold; font-size: 150%; color: red;">Likelihood Rating: 1</span><br/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;(Average of Exploitability, Prevalence and Detectability)</td>
<td style="text-align: center; padding: 4px; font-size: 200%; font-weight: bold; border: 3px solid #444444;">*&nbsp;2&nbsp;&nbsp;</td>
<td>&nbsp;</td></tr>
<tr>
<td>&nbsp;</td>
<td>&nbsp;</td>
<td colspan="3" style="border: #4d953d 1px solid; background-color: #D9D9D9; text-align: center; padding: 4px;">
<span style="font-weight: bold; font-size: 150%; color: red;">Risk Ranking: 2</span><br/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;(Likelihood * Impact)</td>
<td>&nbsp;</td></tr>
{{Top_10_2010:SummaryTableEndTemplate|year=2013}}
<br/>
{{Top_10:SubsectionTableEndTemplate}}
 
{{Top_10_2013:BottomTemplate
    |type={{Top_10_2010:StyleTemplate}}
    |usenext=2013NextLink
    |next={{Top_10:LanguageFile|text=detailsAboutRiskFactors|language=en}}
    |useprev=2013PrevLink
    |prev={{Top_10:LanguageFile|text=whatsNextforOrganizations|language=en}}
    |year=2013
    |language=en
}}

Latest revision as of 14:49, 20 June 2013

© 2002-2013 OWASP Foundation. This document is licensed under the Creative Commons Attribution-ShareAlike 3.0 license. Some rights reserved.