Testing: Spidering and googling


Brief Summary
Web spiders are among the most powerful and useful tools developed for both good and bad purposes on the Internet. A spider serves one major function: data mining. A typical spider (such as Google's) works by crawling a web site one page at a time, gathering and storing relevant information such as email addresses, meta tags, hidden form data, URL information, links, and much more. The spider then crawls all the links on that page, collecting relevant information from each following page, and so on. Before long the spider has crawled thousands of links and pages, gathering bits of information and storing them in a database. This web of paths is where the term 'spider' is derived from.
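The crawl-extract-follow loop described above can be sketched in a few lines of shell. This is only an illustration: local files stand in for web pages (the file names and contents below are invented for the example), so nothing is fetched from a live site.

```shell
# Toy crawler: visit a seed page, extract href targets, and recurse into
# each page exactly once. Local files stand in for URLs (illustrative only).
mkdir -p site
printf '<a href="page1.html">one</a><a href="page2.html">two</a>\n' > site/index.html
printf '<a href="page2.html">two again</a>\n' > site/page1.html
printf 'no links here\n' > site/page2.html

seen=""
crawl() {
  case " $seen " in *" $1 "*) return 0;; esac   # already visited
  seen="$seen $1"
  # pull out href="..." values and follow each one
  for link in $(grep -o 'href="[^"]*"' "site/$1" | sed 's/href="//;s/"//'); do
    crawl "$link"
  done
}
crawl index.html
echo "crawled:$seen"
```

A real spider replaces the local file read with an HTTP fetch and keeps its "seen" list in a database, but the visit-once bookkeeping and link extraction are the same idea.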

The Google search engine found at http://www.google.com offers many features, including language and document translation; web, image, newsgroups, catalog, and news searches; and more. These features offer obvious benefits to even the most uninitiated web surfer, but these same features offer far more nefarious possibilities to the most malicious Internet users, including hackers, computer criminals, identity thieves, and even terrorists. This article outlines the more harmful applications of the Google search engine, techniques that have collectively been termed "Google Hacking."

Description of the Issue
In 1992 there were about 15,000 web sites; by 2006 the number had exceeded 100 million. What if a simple query to a search engine such as Google, such as "Hackable Websites w/ Credit Card Information", produced a list of web sites containing the credit card data of thousands of customers per company?

Suppose an attacker is aware of a web application that stores a clear-text password file in a directory and wants to gather such targets. A search on "intitle:"Index of" .mysql_history" will return, from any of those 100 million web sites that expose such a file, a list of database usernames and passwords. Or perhaps the attacker has a new method to attack a Lotus Notes web server and simply wants to see how many targets are on the Internet; a search on "inurl:domcfg.nsf" will enumerate them. Apply the same logic to a worm looking for its next victim.
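As a rough sketch, a dork like the one above can be turned into a search URL by escaping its spaces and double quotes. The encoding below is a minimal hand-rolled approximation covering only those two characters, not a full URL-encoder; the query string is taken from the text above.

```shell
# Compose the dork and percent-encode it for use in a browser or curl
# (minimal encoding: spaces become '+', double quotes become '%22').
query='intitle:"Index of" .mysql_history'
encoded=$(printf '%s' "$query" | sed 's/ /+/g; s/"/%22/g')
echo "https://www.google.com/search?q=$encoded"
```

Nothing is sent to Google here; the point is only how little effort it takes to script such queries in bulk, which is exactly what makes the worm scenario plausible.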

ADVANCED SEARCH 101 w/Google

Use the plus sign (+) to force a search for an overly common word. Use the minus sign (-) to exclude a term from a search. No space follows these signs.

To search for a phrase, supply the phrase surrounded by double quotes (" ").

A period (.) serves as a single-character wildcard.

An asterisk (*) represents any whole word, not the completion of a word as wildcards traditionally do.
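A few example queries using the syntax above. The search terms themselves are made up for illustration, and these are just query strings; nothing is sent to Google here.

```shell
# Force the common word "the", exclude a term, search an exact phrase,
# and use * as a whole-word wildcard inside a phrase.
q1='+the hacking exposed'
q2='password -cracker'
q3='"index of /private"'
q4='"login * page"'
printf '%s\n' "$q1" "$q2" "$q3" "$q4"
```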

Google advanced operators help refine searches. Advanced operators use a syntax such as the following:

operator:search_term

Notice that there's no space between the operator, the colon, and the search term.

The site: operator instructs Google to restrict a search to a specific web site or domain. The web site to search must be supplied after the colon.

The filetype: operator instructs Google to search only within the text of a particular type of file. The file type to search must be supplied after the colon. Don't include a period before the file extension.

The link: operator instructs Google to search within hyperlinks for a search term.

The cache: operator displays the version of a web page as it appeared when Google crawled the site. The URL of the site must be supplied after the colon.

The intitle: operator instructs Google to search for a term within the title of a document.

The inurl: operator instructs Google to search only within the URL (web address) of a document. The search term must follow the colon.

Site Mapping

To find every web page Google has crawled for a specific site, use the site: operator, e.g. site:www.owasp.org. To see who links to the OWASP web page, use the link: operator, e.g. link:www.owasp.org. Note that site: expects a domain name, not a full URL with the http:// scheme.

Black Box testing and example
Test:

The -s option (--save-headers; newer wget releases accept only the long form) is used to collect the HTTP headers of the web responses along with the page content.

wget -s <target>

Result:

HTTP/1.1 200 OK
Date: Tue, 12 Dec 2006 20:46:39 GMT
Server: Apache/1.3.37 (Unix) mod_jk/1.2.8 mod_deflate/1.0.21 PHP/5.1.6 mod_auth_passthrough/1.8 mod_log_bytes/1.2 mod_bwlimited/1.4 FrontPage/5.0.2.2634a mod_ssl/2.8.28 OpenSSL/0.9.7a
X-Powered-By: PHP/5.1.6
Set-Cookie: PHPSESSID=b7f5c903f8fdc254ccda8dc33651061f; expires=Friday, 05-Jan-07 00:19:59 GMT; path=/
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Last-Modified: Tue, 12 Dec 2006 20:46:39 GMT
Cache-Control: no-store, no-cache, must-revalidate
Cache-Control: post-check=0, pre-check=0
Pragma: no-cache
Connection: close
Content-Type: text/html; charset=utf-8
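Headers captured this way can then be grepped for fingerprinting data such as the server banner and application stack. A small sketch, with part of the response above inlined as a string in place of the saved file:

```shell
# Extract fingerprinting fields from captured headers (inlined here as a
# string for illustration; normally you would read the file wget saved).
headers='HTTP/1.1 200 OK
Server: Apache/1.3.37 (Unix) PHP/5.1.6
X-Powered-By: PHP/5.1.6
Content-Type: text/html; charset=utf-8'
printf '%s\n' "$headers" | grep -E '^(Server|X-Powered-By):'
```

In this response both the Server and X-Powered-By lines leak exact version numbers, which is precisely the kind of detail an attacker would feed back into a search for known exploits.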

Test:

The -r option requests recursive retrieval of the web site's contents, and the -D option restricts the requests to the specified domain(s) only.

wget -r -D <domain> <target>

Result:

22:13:55 (15.73 KB/s) - `www.******.org/indice/13' saved [8379]

--22:13:55-- http://www.******.org/*****/********
           => `www.******.org/*****/********'
Connecting to www.******.org[xx.xxx.xxx.xx]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]

[  <=>                                                                                                                                                                ] 11,308        17.72K/s

...
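Once the site has been mirrored recursively, the local copy can be mined offline, for example for e-mail addresses. A sketch against a file created inline (the directory name and address are fabricated for the example):

```shell
# Mine a mirrored tree for e-mail addresses; a recursive wget run would
# normally have populated the directory.
mkdir -p mirror
printf 'Contact: webmaster@example.org\n' > mirror/contact.html
grep -rhoE '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}' mirror
```

The same pattern extends to harvesting links, hidden form fields, or comments from the mirrored pages, which is the data-mining step described in the summary above.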

Grey Box testing and example
Testing for Topic X vulnerabilities: -INPROGRESS PLACEHOLDER ... Result Expected: ...