Testing: Conduct search engine discovery/reconnaissance for information leakage (OTG-INFO-001)

Revision as of 10:17, 11 December 2013

This article is part of the new OWASP Testing Guide v4. 
At the moment the project is in the REVIEW phase.

Back to the OWASP Testing Guide v4 ToC: https://www.owasp.org/index.php/OWASP_Testing_Guide_v4_Table_of_Contents
Back to the OWASP Testing Guide Project: http://www.owasp.org/index.php/OWASP_Testing_Project

Summary

There are direct and indirect elements to search engine discovery and reconnaissance. Direct methods relate to searching the indexes and the associated content from caches. Indirect methods relate to gleaning sensitive design and configuration information by searching forums, newsgroups and tendering websites.

Once a search engine robot has completed crawling, it commences indexing the web page based on tags and associated attributes, such as <TITLE>, in order to return the relevant search results. [1]

If the robots.txt file is not kept up to date during the lifetime of the web site, and inline HTML meta tags that instruct robots not to index content have not been used, then it is possible for indexes to contain web content that the owners did not intend to be included. Website owners may use the previously mentioned robots.txt, HTML meta tags, authentication, and tools provided by search engines to remove such content.
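The effect of a robots.txt policy can be checked offline with Python's standard library. The sketch below uses hypothetical Disallow rules for a site under test; paths not covered by a rule remain eligible for crawling and indexing:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for the application being tested.
robots_txt = """\
User-agent: *
Disallow: /backup/
Disallow: /admin/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Anything outside the Disallow rules may be crawled and indexed.
for path in ("/backup/db.sql", "/admin/login", "/index.html"):
    print(path, parser.can_fetch("*", path))
```

Note that robots.txt is advisory: well-behaved crawlers honour it, but it does not prevent access, and it publicly advertises the listed paths.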

Test Objectives

To understand what sensitive design and configuration information about the application, system, or organisation is exposed both directly (on the organisation's website) and indirectly (on third party websites).

How to Test

Using a search engine, search for:

  • Network diagrams and configurations
  • Archived posts and emails by administrators and other key staff
  • Logon procedures and username formats
  • Usernames and passwords
  • Error message content
  • Development, test, UAT and staging versions of the website
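Searches for the items above can be scripted. This is a minimal sketch against a hypothetical target domain (example.com); the operator and keyword pairings are illustrative, not an exhaustive checklist:

```python
# Build candidate search queries for a target domain using common
# operators (site:, filetype:, intitle:, inurl:). The domain and the
# keyword choices below are assumptions for illustration only.
def build_queries(domain):
    templates = [
        'site:{d} filetype:pdf "network diagram"',  # network diagrams
        'site:{d} intitle:"index of"',              # exposed directory listings
        'site:{d} filetype:log',                    # error message content
        'site:{d} inurl:staging OR inurl:uat',      # non-production versions
    ]
    return [t.format(d=domain) for t in templates]

for query in build_queries("example.com"):
    print(query)
```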

Black Box Testing

Using the advanced "site:" search operator, it is possible to restrict search results to a specific domain [2]. Do not limit testing to just one search engine provider, as different providers may generate different results depending on when they crawled the content and on their own algorithms. Consider:

  • Baidu
  • binsearch.info
  • Bing
  • Duck Duck Go
  • ixquick/Startpage
  • Google
  • Shodan
  • PunkSpider
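The same query can be issued to several of the engines above by URL-encoding it into each engine's search URL. A minimal sketch follows; the URL templates reflect the patterns in use at the time of writing and may change:

```python
from urllib.parse import quote_plus

# Search URL templates for a few of the engines listed above.
# These patterns are assumptions and may change over time.
ENGINES = {
    "Google": "https://www.google.com/search?q={}",
    "Bing": "https://www.bing.com/search?q={}",
    "DuckDuckGo": "https://duckduckgo.com/?q={}",
}

def search_urls(query):
    """Return the same query encoded for each engine."""
    encoded = quote_plus(query)
    return {name: url.format(encoded) for name, url in ENGINES.items()}

for name, url in search_urls('site:owasp.org intitle:"index of"').items():
    print(name, url)
```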

Duck Duck Go and ixquick/Startpage provide reduced information leakage about the tester.

Google provides the advanced "cache:" search operator [2], but this is equivalent to clicking the "Cached" link next to each Google search result. Hence, using the advanced "site:" search operator and then clicking the "Cached" link is preferred.

The Google SOAP Search API supports the doGetCachedPage and the associated doGetCachedPageResponse SOAP Messages [3] to assist with retrieving cached pages. An implementation of this is under development by the OWASP "Google Hacking" Project.

PunkSpider is a web application vulnerability search engine. It is of little use to a penetration tester doing manual work, but it can be useful as a demonstration of how easily script kiddies can find vulnerabilities.


Example

To find the web content of owasp.org indexed by a typical search engine, the syntax required is:

site:owasp.org

[Image: Google "site:" operator search results example]

To display the index.html of owasp.org as cached, the syntax is:

cache:owasp.org

[Image: Google "cache:" operator search results example]

Google Hacking Database

The Google Hacking Database [3] is a list of useful search queries for Google. Queries are grouped into several categories:

  • Footholds
  • Files containing usernames
  • Sensitive Directories
  • Web Server Detection
  • Vulnerable Files
  • Vulnerable Servers
  • Error Messages
  • Files containing juicy info
  • Files containing passwords
  • Sensitive Online Shopping Info
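GHDB-style queries can be scoped to the target with the "site:" operator. This is a minimal sketch; the dorks shown are illustrative examples, not entries copied from the GHDB:

```python
# Scope a list of GHDB-style dorks to a single target domain by
# prepending the site: operator to each query.
def scope_dorks(dorks, domain):
    return ["site:{} {}".format(domain, dork) for dork in dorks]

# Illustrative dorks (assumptions, not actual GHDB entries).
dorks = [
    'intitle:"index of" "backup"',
    'filetype:sql "insert into"',
]

for query in scope_dorks(dorks, "example.com"):
    print(query)
```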


Gray Box testing and example

Gray Box testing is the same as Black Box testing above.


Vulnerability References

Web
[1] "Google Basics: Learn how Google Discovers, Crawls, and Serves Web Pages" - http://www.google.com/support/webmasters/bin/answer.py?answer=70897
[2] "Operators and More Search Help" - http://support.google.com/websearch/bin/answer.py?hl=en&answer=136861
[3] "Google Hacking Database" - http://www.exploit-db.com/google-dorks/

Tools

[4] FoundStone SiteDigger - http://www.mcafee.com/uk/downloads/free-tools/sitedigger.aspx
[5] Google Hacker - http://yehg.net/lab/pr0js/files.php/googlehacker.zip
[6] Stach & Liu's Google Hacking Diggity Project - http://www.stachliu.com/resources/tools/google-hacking-diggity-project/
[7] PunkSPIDER - http://punkspider.hyperiongray.com/

Remediation

Carefully consider the sensitivity of design and configuration information before it is posted online.

Periodically review the sensitivity of existing design and configuration information that is posted online.