Conduct search engine discovery/reconnaissance for information leakage (OTG-INFO-001)

{{Template:OWASP Testing Guide v4}}
== Summary ==
There are direct and indirect elements to search engine discovery and reconnaissance. Direct methods relate to searching the indexes and the associated content from caches. Indirect methods relate to gleaning sensitive design and configuration information by searching forums, newsgroups, and tendering websites.
  
Once a search engine robot has completed crawling, it commences indexing the web page based on tags and associated attributes, such as <TITLE>, in order to return the relevant search results [1]. If the robots.txt file is not updated during the lifetime of the web site, and inline HTML meta tags that instruct robots not to index content have not been used, then it is possible for indexes to contain web content not intended to be included by the owners. Website owners may use the previously mentioned robots.txt, HTML meta tags, authentication, and tools provided by search engines to remove such content.
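As a minimal sketch of the two mechanisms mentioned above (the /backup/ path is a hypothetical example), a robots.txt entry and an inline meta tag might look like:
<pre>
# robots.txt in the web root - ask all crawlers not to crawl /backup/
User-agent: *
Disallow: /backup/
</pre>
<pre>
<!-- inline HTML meta tag - ask robots not to index this page or follow its links -->
<meta name="robots" content="noindex, nofollow">
</pre>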
== Test Objectives ==
  
To understand what sensitive design and configuration information of the application/system/organization is exposed either directly (on the organization's website) or indirectly (on a third-party website).
== How to Test ==
Use a search engine to search for:
* Network diagrams and configurations
* Archived posts and emails by administrators and other key staff
* Log on procedures and username formats
* Usernames and passwords
* Error message content
* Development, test, UAT and staging versions of the website
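For illustration, queries in the following style can help locate such material (example.com is a placeholder for the target domain, and operator support varies between search engines):
<pre>
site:example.com filetype:pdf "network diagram"
site:example.com inurl:login OR intitle:"log in"
site:example.com ext:bak OR ext:old OR ext:sql
site:uat.example.com OR site:staging.example.com
</pre>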
  
=== Black Box Testing ===
Using the advanced "site:" search operator, it is possible to restrict search results to a specific domain [2]. Do not limit testing to just one search engine provider as they may generate different results depending on when they crawled content and their own algorithms. Consider using the following search engines:
* Baidu
* binsearch.info
* Bing
* Duck Duck Go
* ixquick/Startpage
* Google
* Shodan
* PunkSpider
  
Duck Duck Go and ixquick/Startpage provide reduced information leakage about the tester.

Google provides the Advanced "cache:" search operator [2], but this is equivalent to clicking the "Cached" link next to each Google Search Result. Hence, the use of the Advanced "site:" search operator and then clicking the "Cached" link is preferred.

The Google SOAP Search API supports the doGetCachedPage and the associated doGetCachedPageResponse SOAP messages [3] to assist with retrieving cached pages. An implementation of this is under development by the [[:Category:OWASP_Google_Hacking_Project|OWASP "Google Hacking" Project]].

PunkSpider is a web application vulnerability search engine. It is of little use for a penetration tester doing manual work, but it can be useful as a demonstration of how easily script kiddies can find vulnerabilities.
==== Example ====
To find the web content of owasp.org indexed by a typical search engine, the syntax required is:
<pre>
site:owasp.org
</pre>
[[Image:Google_site_Operator_Search_Results_Example_20121219.jpg||border]]
  
To display the index.html of owasp.org as cached, the syntax is:
<pre>
cache:owasp.org
</pre>
[[Image:Google_cache_Operator_Search_Results_Example_20121219.jpg||border]]
  
==== Google Hacking Database ====
  
The Google Hacking Database is a list of useful search queries for Google. Queries are grouped into several categories:
* Footholds
* Files containing usernames
* Sensitive Directories
* Web Server Detection
* Vulnerable Files
* Vulnerable Servers
* Error Messages
* Files containing juicy info
* Files containing passwords
* Sensitive Online Shopping Info
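As a hedged illustration of the style of queries collected in these categories (typical patterns, not entries quoted from the database itself):
<pre>
intitle:"index of" "backup"
filetype:log inurl:"password"
</pre>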
  
=== Gray Box Testing ===
Gray Box testing is the same as Black Box testing above.
  
== Vulnerability References ==
'''Web'''<br>
[1] "Google Basics: Learn how Google Discovers, Crawls, and Serves Web Pages" - https://support.google.com/webmasters/answer/70897 <br>
[2] "Operators and More Search Help" - https://support.google.com/websearch/answer/136861?hl=en <br>
[3] "Google Hacking Database" - http://www.exploit-db.com/google-dorks/ <br>
  
== Tools ==
[4] FoundStone SiteDigger - http://www.mcafee.com/uk/downloads/free-tools/sitedigger.aspx <br>
[5] Google Hacker - http://yehg.net/lab/pr0js/files.php/googlehacker.zip <br>
[6] Stach & Liu's Google Hacking Diggity Project - http://www.stachliu.com/resources/tools/google-hacking-diggity-project/ <br>
[7] PunkSPIDER - http://punkspider.hyperiongray.com/ <br>
  
== Remediation ==
Carefully consider the sensitivity of design and configuration information before it is posted online.
  
Periodically review the sensitivity of existing design and configuration information that is posted online.
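Where pages must remain online but should not appear in search indexes, a noindex directive can be served; the following is a minimal sketch assuming an Apache server with mod_headers enabled:
<pre>
# Apache httpd.conf / .htaccess: ask search engines not to index or cache these responses
Header set X-Robots-Tag "noindex, noarchive"
</pre>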