Review Webserver Metafiles for Information Leakage (OTG-INFO-003)
This article is part of the OWASP Testing Guide v3.
This section describes how to test the robots.txt file.
Description of the Issue
Web spiders/robots/crawlers retrieve a web page and then recursively traverse its hyperlinks to retrieve further web content. Their expected behavior is governed by the Robots Exclusion Protocol, expressed in the robots.txt file in the web root directory.
Within the robots.txt file, the User-Agent: directive names the specific web spider/robot/crawler a record applies to, e.g. User-Agent: Googlebot refers to Google's crawler, while User-Agent: * applies to all web spiders/robots/crawlers.
The Disallow: directive within the robots.txt file specifies which paths web spiders/robots/crawlers should *not* recursively retrieve, e.g. Disallow: /cgi-bin/ excludes the /cgi-bin directory and its sub-directories.
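To illustrate, a minimal robots.txt might look like this (the paths shown are hypothetical):

```
User-Agent: *
Disallow: /cgi-bin/
Disallow: /admin/

User-Agent: Googlebot
Disallow: /internal/
```

Here all crawlers are asked to skip /cgi-bin/ and /admin/, while Google's crawler is additionally asked to skip /internal/.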
Web spiders/robots/crawlers can intentionally ignore the Disallow: directives in the robots.txt file. Hence, robots.txt should not be relied upon as an access-control mechanism for web content that is not intended to be stored or published by external parties.
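A compliant crawler interprets these directives mechanically. Python's standard urllib.robotparser module can be used to check which URLs a given robots.txt permits; the robots.txt content and host name below are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for www.example.com
robots_txt = """\
User-Agent: *
Disallow: /cgi-bin/
Disallow: /admin/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant crawler skips disallowed paths...
print(rp.can_fetch("*", "http://www.example.com/admin/login"))  # False
print(rp.can_fetch("*", "http://www.example.com/index.html"))   # True
# ...but nothing prevents a client from simply requesting
# /admin/login anyway; Disallow is purely advisory.
```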
Black Box testing and example
The robots.txt file is retrieved from the web root directory of the web server. For example, the URL "http://www.google.com/robots.txt" is the robots.txt file of www.google.com.
To retrieve the robots.txt from www.google.com using wget:
$ wget http://www.google.com/robots.txt
--23:59:24--  http://www.google.com/robots.txt
           => 'robots.txt'
Resolving www.google.com... 22.214.171.124, 126.96.36.199, 188.8.131.52, ...
Connecting to www.google.com|184.108.40.206|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/plain]

    [ <=>                                 ] 3,425         --.--K/s

23:59:26 (13.67MB/s) - 'robots.txt' saved
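From a tester's perspective, each Disallow: entry names a path the site owner wanted kept out of search indexes, which makes the retrieved file a map of potentially interesting locations. A minimal sketch of extracting those candidate paths for further testing (the robots.txt content here is a hypothetical example):

```python
# Collect candidate paths for further testing from a robots.txt body.
robots_txt = """\
User-Agent: *
Disallow: /cgi-bin/
Disallow: /backup/
Disallow: /old-site/
"""

disallowed = [
    line.split(":", 1)[1].strip()
    for line in robots_txt.splitlines()
    if line.lower().startswith("disallow:")
]
print(disallowed)  # ['/cgi-bin/', '/backup/', '/old-site/']
```

Each extracted path can then be requested directly to see whether the "hidden" content is actually access-controlled.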
Google provides an "Analyze robots.txt" function as part of its "Google Webmaster Tools", which can assist with testing.
Gray Box testing and example
The process is the same as Black Box testing above.
References
-  "The Web Robots Pages" - http://www.robotstxt.org/
-  "How Google crawls my site" - http://www.google.com/support/webmasters/bin/topic.py?topic=8843
-  "(ISC)2 Blog: The Attack of the Spiders from the Clouds" - http://blog.isc2.org/isc2_blog/2008/07/the-attack-of-t.html
-  "How do I check that my robots.txt file is working as expected?" - http://www.google.com/support/webmasters/bin/answer.py?answer=35237