Review Webserver Metafiles for Information Leakage (OTG-INFO-003)

Summary
This section describes how to test the robots.txt file for Information Leakage of the web application's directory/folder path(s). Furthermore, the list of directories that are to be avoided by Spiders/Robots/Crawlers can also be created as a dependency for OWASP-IG-009.

Test Objectives
1. Information Leakage of the web application's directory/folder path(s).

2. Create the list of directories that are to be avoided by Spiders/Robots/Crawlers

How to Test
Web spiders/robots/crawlers retrieve a web page and then recursively traverse hyperlinks to retrieve further web content. Their accepted behavior is specified by the Robots Exclusion Protocol, expressed in the robots.txt file in the web root directory [1].

As an example, the beginning of the robots.txt file from http://www.google.com/robots.txt, sampled on 11 August 2013, is quoted below:

User-agent: *
Disallow: /search
Disallow: /sdch
Disallow: /groups
Disallow: /images
Disallow: /catalogs
...

The User-Agent directive refers to the specific web spider/robot/crawler. For example, User-Agent: Googlebot refers to the GoogleBot crawler, while User-Agent: * in the example above applies to all web spiders/robots/crawlers [2], as quoted below:

User-agent: *

The Disallow directive specifies which resources are prohibited to spiders/robots/crawlers. In the example above, directories such as the following are prohibited:

...
Disallow: /search
Disallow: /sdch
Disallow: /groups
Disallow: /images
Disallow: /catalogs
...

Web spiders/robots/crawlers can intentionally ignore the Disallow directives specified in a robots.txt file [3]. Hence, robots.txt should not be considered a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties.
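As an illustration of how a cooperating crawler interprets these directives, the short Python sketch below uses the standard-library urllib.robotparser module to check a few paths against Google's robots.txt. The URL and the disallowed paths are taken from the example above and are illustrative only; a hostile client can simply skip this check, which is precisely why robots.txt offers no enforcement.

# Illustrative sketch: how a cooperating crawler consults robots.txt.
# URL and the disallowed paths are taken from the Google example above;
# "/" is added only as an example of a path with no matching Disallow rule.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("http://www.google.com/robots.txt")
rp.read()  # download and parse the Allow/Disallow rules

for path in ["/", "/search", "/images"]:
    verdict = "allowed" if rp.can_fetch("*", path) else "disallowed"
    print(path, verdict)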

Black Box testing and example
wget

The robots.txt file is retrieved from the web root directory of the web server.

For example, to retrieve the robots.txt from www.google.com using wget or curl:

cmlh$ wget http://www.google.com/robots.txt
--2013-08-11 14:40:36--  http://www.google.com/robots.txt
Resolving www.google.com... 74.125.237.17, 74.125.237.18, 74.125.237.19, ...
Connecting to www.google.com|74.125.237.17|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/plain]
Saving to: ‘robots.txt.1’

[ <=>                                  ] 7,074       --.-K/s   in 0s

2013-08-11 14:40:37 (59.7 MB/s) - ‘robots.txt’ saved [7074]

cmlh$ head -n5 robots.txt
User-agent: *
Disallow: /search
Disallow: /sdch
Disallow: /groups
Disallow: /images
cmlh$

cmlh$ curl -O http://www.google.com/robots.txt
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
101  7074    0  7074    0     0   9410      0 --:--:-- --:--:-- --:--:-- 27312

cmlh$ head -n5 robots.txt
User-agent: *
Disallow: /search
Disallow: /sdch
Disallow: /groups
Disallow: /images
cmlh$
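The same retrieval can also be scripted. The minimal Python sketch below (standard library only) downloads robots.txt and prints its first five lines, mirroring the wget/curl and head commands above; the URL is the same illustrative Google example.

# Sketch: fetch robots.txt and print its first five lines (cf. wget/curl + head above).
import urllib.request

with urllib.request.urlopen("http://www.google.com/robots.txt") as response:
    body = response.read().decode("utf-8", errors="replace")

for line in body.splitlines()[:5]:
    print(line)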

Speculum

"Speculum" automates the creation of an initial scope of files and directories/folders of a web site for Spiders/Robots/Crawlers.

For example, to create the initial scope based on the Allow: directive from www.google.com using "Speculum":

cmlh$ ./speculum.pl -www www.google.com

"Speculum" Alpha v0.0_2

Copyright 2013 Christian Heinrich
Licensed under the Apache License, Version 2.0

1. Downloading http://www.google.com/robots.txt
2. "robots.txt" saved as "www.google.com-robots.txt"
3. Sending Allow: URIs of www.google.com to web proxy i.e. 127.0.0.1:8080
   /catalogs/about sent
   /catalogs/p? sent
   /news/directory sent
   ...
4. Done.

cmlh$
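Speculum's own source is the authoritative reference for its behaviour; as a rough approximation of the same idea, the Python sketch below reads a locally saved copy of robots.txt, collects the Allow: and Disallow: paths, and expands them into absolute URLs that could seed a spider's initial scope or be replayed through an intercepting proxy. The file name and base URL are assumptions taken from the example above.

# Rough approximation of the Speculum idea (not its actual implementation):
# build an initial crawl scope from the Allow:/Disallow: entries of robots.txt.
from urllib.parse import urljoin

BASE_URL = "http://www.google.com/"            # assumed target site
ROBOTS_COPY = "www.google.com-robots.txt"      # assumed local copy, as saved above

scope = []
with open(ROBOTS_COPY, encoding="utf-8") as fh:
    for line in fh:
        directive, _, value = line.partition(":")
        if directive.strip().lower() in ("allow", "disallow"):
            path = value.strip()
            if path:                           # skip empty "Disallow:" entries
                scope.append(urljoin(BASE_URL, path))

for url in scope:
    print(url)                                 # e.g. feed these URLs to a spider or proxy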

Analyze robots.txt using Google Webmaster Tools

Google provides an "Analyze robots.txt" function as part of its "Google Webmaster Tools", which can assist with testing [4]. The procedure is as follows:

1. Sign into Google Webmaster Tools with your Google Account.
2. On the Dashboard, click the URL for the site you want.
3. Click Tools, and then click Analyze robots.txt.

Gray Box testing and example
The process is the same as Black Box testing above.

Tools

 * Browser (View Source function)
 * curl
 * wget