Conduct search engine discovery/reconnaissance for information leakage (OTG-INFO-001)

This is a draft of a section of the new Testing Guide v3

Brief Summary
This section describes how to search the Google Index and remove the associated web content from the Google Cache.

Description of the Issue
Once GoogleBot has completed crawling, it commences indexing the web page based on tags and associated attributes, such as <TITLE>, in order to return the relevant search results. [1]

If the robots.txt file is not updated during the lifetime of the web site, then it is possible for web content not intended to be included in Google's Search Results to be returned.

Therefore, such content must be removed from the Google Cache.

Black Box Testing
Using the advanced "site:" search operator, it is possible to restrict Search Results to a specific domain [2].

Google provides the advanced "cache:" search operator [2], but this is equivalent to clicking the "Cached" link next to each Google Search Result. Hence, the use of the advanced "site:" search operator and then clicking the "Cached" link is preferred.

The Google SOAP Search API supports the doGetCachedPage and the associated doGetCachedPageResponse SOAP Messages [3] to assist with retrieving cached pages. An implementation of this is under development by the OWASP "Google Hacking" Project.
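As an illustration, a doGetCachedPage request envelope might look like the sketch below. The operation name and the "urn:GoogleSearch" namespace follow the Google SOAP Search API, but the parameter names, the placeholder licence key, and the exact encoding attributes are assumptions; consult the API's WSDL [3] for the authoritative message format.

```xml
<SOAP-ENV:Envelope
    xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <SOAP-ENV:Body>
    <ns1:doGetCachedPage xmlns:ns1="urn:GoogleSearch"
        SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
      <!-- Placeholder for the licence key issued on API registration -->
      <key xsi:type="xsd:string">insert-key-here</key>
      <!-- The URL whose cached copy is being requested -->
      <url xsi:type="xsd:string">http://www.owasp.org/</url>
    </ns1:doGetCachedPage>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
```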

Example
To find the web content of owasp.org indexed by Google, the following Google Search Query is issued: site:owasp.org

To display the index.html of owasp.org as cached by Google, the following Google Search Query is issued: cache:owasp.org
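The same queries can be issued programmatically by URL-encoding them. The following is a minimal Python sketch; it assumes the standard Google search endpoint (https://www.google.com/search?q=...) and simply encodes the operator queries shown above:

```python
# Build Google search URLs for the "site:" and "cache:" operator queries.
from urllib.parse import quote_plus

BASE = "https://www.google.com/search?q="

def search_url(query: str) -> str:
    """Return a Google search URL for the given query string."""
    return BASE + quote_plus(query)

# Restrict Search Results to a specific domain with the "site:" operator.
print(search_url("site:owasp.org"))
# Request the cached copy of a page with the "cache:" operator.
print(search_url("cache:owasp.org"))
```

Note that automated querying may be restricted by Google's Terms of Service; for systematic testing, the supported APIs mentioned above are the safer route.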

Remediation
For urgent removal, Google provides the "URL Removal" function as part of the "Google Webmaster Tools" service [4].

Please note that this still requires you to modify the robots.txt file within the web root directory, adding a "Disallow:" rule for this content, to prevent it from reappearing in the Google Search Results after 90 days.
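For example, a robots.txt entry excluding a directory from crawling might look like the following; the "/internal/" path is purely hypothetical and should be replaced with the content actually being excluded:

```
User-agent: *
Disallow: /internal/
```

Keep in mind that robots.txt is publicly readable, so listing sensitive paths in it can itself leak information to an attacker.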

Gray Box testing and example
Gray Box testing is the same as Black Box testing above.