Testing: Review Webserver Metafiles for Information Leakage (OTG-INFO-003)


OWASP Testing Guide v3 Table of Contents

This article is part of the OWASP Testing Guide v3. The entire OWASP Testing Guide v3 can be downloaded here.

OWASP is currently working on the OWASP Testing Guide v4: you can browse the Guide here



This is a draft of a section of the new Testing Guide v3

Brief Summary


This section describes how to test the robots.txt file.

Description of the Issue


Web spiders/robots/crawlers retrieve a web page and then recursively traverse hyperlinks to retrieve further web content. Their expected behavior is specified by the Robots Exclusion Protocol in the robots.txt file located in the web root directory.

Web spiders/robots/crawlers can intentionally ignore the Disallow directives of the robots.txt file. Hence, robots.txt should not be relied upon as a mechanism to restrict access to web content that is not intended to be retrieved, stored, or published by external parties.
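
For illustration, a minimal, hypothetical robots.txt might look like the following (the paths are invented for this example). The User-agent line names the crawlers the rules apply to, and each Disallow line lists a path that compliant crawlers should not retrieve:

User-agent: *
Disallow: /admin/
Disallow: /private/
Disallow: /backup/

From a testing perspective, each Disallow entry is a hint about content the site operator did not want indexed, which makes the file itself a potential source of information leakage.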

Black Box testing and example

Description and goal

Our goal is to create a map of the application with all of its points of access (gates). This will be useful for the second, active phase of penetration testing. You can use a tool such as wget (powerful and very easy to use) to retrieve all the information published by the application.


Test:

The -S option is used to print the HTTP response headers. The --spider option is used so that nothing is downloaded, since we only want the HTTP headers.

wget -S --spider <target>


Result:

http://www.<target>/
           => `index.html'
Resolving www.<target>... 64.xxx.xxx.23, 64.xxx.xxx.24, 64.xxx.xxx.20, ...
Connecting to www.<target>|64.xxx.xxx.23|:80... connected.
HTTP request sent, awaiting response... 
  HTTP/1.1 200 OK
  Date: Mon, 10 Sep 2007 00:43:04 GMT
  Server: Apache
  Accept-Ranges: bytes
  Cache-Control: max-age=60, private
  Expires: Mon, 10 Sep 2007 00:44:01 GMT
  Vary: Accept-Encoding,User-Agent
  Content-Type: text/html
  X-Pad: avoid browser bug
  Content-Length: 135750
  Keep-Alive: timeout=5, max=64
  Connection: Keep-Alive
Length: 135,750 (133K) [text/html]
200 OK
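
Since this section focuses on robots.txt, the file itself can also be retrieved directly and its Disallow entries reviewed. A minimal sketch (the <target> placeholder is kept from the examples above; no output is shown):

wget http://<target>/robots.txt
grep -i "^Disallow:" robots.txt

Each Disallow path listed by the second command is a candidate entry point to probe during the later active testing phase.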

Test:

The -r option is used to recursively retrieve the website's content, and the -D option restricts the requests to the specified domain only.

wget -r -D <domain> <target>

Result:

22:13:55 (15.73 KB/s) - `www.******.org/indice/13' saved [8379]

--22:13:55--  http://www.******.org/*****/********
           => `www.******.org/*****/********'
Connecting to www.******.org[xx.xxx.xxx.xx]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]

    [ <=>                                          ] 11,308        17.72K/s

...
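
Once the site has been mirrored, the retrieved paths can be compared against the Disallow entries collected earlier, to check whether content the operator asked crawlers to avoid is nevertheless directly retrievable. A minimal sketch, assuming the mirror was saved under a www.<target>/ directory and robots.txt was downloaded to the current directory:

grep -i "^Disallow:" robots.txt
find www.<target>/ -type f | sort

Paths from the mirror that fall under a Disallow entry represent content that robots.txt was meant to keep out of search engines but that is still published by the web server.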



Gray Box testing and example

The process is the same as Black Box testing above.

References

Whitepapers


Tools