Testing: Conduct search engine discovery/reconnaissance for information leakage (OTG-INFO-001)

Revision as of 07:18, 21 July 2013

This article is part of the new OWASP Testing Guide v4. 
At the moment the project is in the REVIEW phase.




Summary

There are direct and indirect elements to search engine discovery and reconnaissance. Direct methods relate to searching the Google index and removing the associated web content from the Google Cache. Indirect methods relate to gleaning sensitive design and configuration information by searching forums, newsgroups, and tendering websites.

Once GoogleBot has completed crawling, it begins indexing the web page based on tags and associated attributes, such as <TITLE>, in order to return relevant search results [1].

If the robots.txt file is not kept up to date during the lifetime of the web site, it is possible for web content that was never intended to appear in Google's search results to be returned.

If such content has already been indexed, it must be removed from the Google Cache.

Test Objectives

To understand what sensitive design and configuration information about the application, system, or organisation is exposed, either directly (on the organisation's website) or indirectly (on third-party websites).

How to Test

Using a search engine, search for:

  • Network diagrams and configurations
  • Archived posts and emails by administrators and other key staff
  • Logon procedures and username formats
  • Usernames and passwords
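The categories above can be turned into concrete search queries ("dorks"). The sketch below is illustrative only: the target domain is hypothetical and the query strings are examples, not an exhaustive list.

```python
# Illustrative sketch: build search-engine queries for the information
# categories listed above. Query strings are examples, not an exhaustive list.
from urllib.parse import quote_plus

TARGET = "example.org"  # hypothetical target domain

DORKS = {
    "network diagrams":     f'site:{TARGET} filetype:pdf "network diagram"',
    "archived admin posts": f'"@{TARGET}" intext:administrator',
    "logon procedures":     f'site:{TARGET} intitle:login OR inurl:logon',
    "usernames/passwords":  f'site:{TARGET} filetype:xls OR filetype:txt intext:password',
}

def search_url(query: str) -> str:
    """Return a Google web-search URL for the given query string."""
    return "https://www.google.com/search?q=" + quote_plus(query)

for label, query in DORKS.items():
    print(f"{label}: {search_url(query)}")
```

Each generated URL can be opened in a browser; running the queries programmatically may be rate-limited or blocked by the search engine.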

Black Box Testing

Using the advanced "site:" search operator, it is possible to restrict search results to a specific domain [2].

Google provides the advanced "cache:" search operator [2], but this is equivalent to clicking the "Cached" link next to each Google search result. Hence, using the advanced "site:" search operator and then clicking the "Cached" link is preferred.

The Google SOAP Search API supports the doGetCachedPage and the associated doGetCachedPageResponse SOAP Messages [3] to assist with retrieving cached pages. An implementation of this is under development by the OWASP "Google Hacking" Project.
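For reference, a doGetCachedPage request has roughly the shape below. This is a sketch reconstructed from the published GoogleSearch WSDL: the licence key and target URL are placeholders, and Google has since discontinued the SOAP Search API, so this is of historical interest only.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope
    xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
    SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <SOAP-ENV:Body>
    <ns1:doGetCachedPage xmlns:ns1="urn:GoogleSearch">
      <!-- placeholder licence key -->
      <key>YOUR-API-KEY</key>
      <!-- page to retrieve from the Google Cache -->
      <url>http://owasp.org/</url>
    </ns1:doGetCachedPage>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
```

The corresponding doGetCachedPageResponse message returns the cached page content as base64-encoded data.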

Example

To find the web content of owasp.org indexed by the Google Cache, the following search query is issued:

site:owasp.org

[Screenshot: Google "site:" operator search results example]

To display the index.html of owasp.org as cached by Google, the following search query is issued:

cache:owasp.org

[Screenshot: Google "cache:" operator search results example]

Gray Box testing and example

Gray box testing is the same as black box testing above.

Tools

[1] FoundStone SiteDigger - http://www.mcafee.com/uk/downloads/free-tools/sitedigger.aspx
[2] Google Hacker - http://yehg.net/lab/pr0js/files.php/googlehacker.zip
[3] Stach & Liu's Google Hacking Diggity Project - http://www.stachliu.com/resources/tools/google-hacking-diggity-project/

Vulnerability References

Web
[1] "Google Basics: Learn how Google Discovers, Crawls, and Serves Web Pages" - http://www.google.com/support/webmasters/bin/answer.py?answer=70897
[2] "Operators and More Search Help" - http://support.google.com/websearch/bin/answer.py?hl=en&answer=136861

Remediation

Carefully consider the sensitivity of design and configuration information before it is posted online.

Periodically review the sensitivity of existing design and configuration information that is posted online.
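Where content must stay out of search results entirely, crawler-exclusion mechanisms can help, with the caveat that robots.txt is advisory only and itself discloses every path it lists. A minimal sketch follows; the path and header values are illustrative:

```
# robots.txt (served at the site root) — advisory only, not access control;
# note that the listed paths are visible to anyone who fetches this file
User-agent: *
Disallow: /backup/

# For individual responses, an X-Robots-Tag HTTP header asks search engines
# not to index or cache the page:
#   X-Robots-Tag: noindex, noarchive
```

Sensitive content should additionally be protected by authentication, since exclusion directives do not stop a crawler or attacker that ignores them.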