Test Network/Infrastructure Configuration (OTG-CONFIG-001)

Proper configuration management of the web server infrastructure is very important in order to preserve the security of the application itself. If elements such as the web server software, the back-end database servers, or the authentication servers are not properly reviewed and secured, they might introduce undesired risks or new vulnerabilities that might compromise the application itself.

For example, a web server vulnerability that allows a remote attacker to disclose the source code of the application (a vulnerability that has arisen a number of times in both web servers and application servers) could compromise the application, as anonymous users could use the information disclosed in the source code to leverage attacks against the application or its users.

In order to test the configuration management infrastructure the following steps need to be taken:


 * the different elements that make up the infrastructure need to be determined in order to understand how they interact with the web application and how they affect its security
 * all the elements of the infrastructure need to be reviewed in order to make sure that they do not hold any known vulnerabilities
 * a review needs to be done of the administrative tools used to maintain all the different elements
 * the authentication systems, if any, need to be reviewed in order to ensure that they serve the needs of the application and that they cannot be manipulated by external users to leverage access

Review of the application architecture
The application architecture needs to be reviewed through the test to determine what different components are used to build the web application. In small setups, such as a simple CGI-based application, a single server might be used that runs the web server which executes the C, Perl, or shell CGI application, with authentication perhaps also based on the web server's own authentication mechanisms. In more complex setups, such as an online banking system, multiple servers might be involved: a reverse proxy, a front-end web server, an application server, and a database or LDAP server. Each of these servers will be used for different purposes and might even be divided into different networks with firewalling devices between them, creating different DMZs. This way, access to the web server will not grant a remote user access to the authentication mechanism itself, and compromises of the different elements of the architecture can be isolated so that they do not compromise the whole architecture.

Getting knowledge of the application architecture can be easy if this information is provided to the testing team by the application developers in document form or through interviews, but can prove very difficult to determine when doing a blind penetration test.

In the latter case, a tester will first start with the assumption that there is a simple setup (a single server) and will, through the information retrieved from other tests, derive the different elements, question this assumption, and extend the architecture model accordingly. The tester will start by asking simple questions such as: “Is there a firewalling system protecting the web server?” This question will be answered based on the results of network scans targeted at the web server and the analysis of whether the network ports of the web server are being filtered at the network edge (no answer, or ICMP unreachables, are received) or whether the server is directly connected to the Internet (i.e. returns RST packets for all non-listening ports). This analysis can be enhanced in order to determine the type of firewall system used based on network packet tests: is it a stateful firewall or is it an access list filter on a router? How is it configured? Can it be bypassed?
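
As an illustration, the following is a minimal sketch of such a probe using the Scapy packet library; the target host and the choice of ports are hypothetical, and any such test should of course only be run against systems one is authorised to assess:

from scapy.all import IP, TCP, ICMP, sr1

target = "www.example.com"  # hypothetical target

for port in (22, 25, 8080):  # ports assumed not to be listening
    reply = sr1(IP(dst=target)/TCP(dport=port, flags="S"), timeout=2, verbose=0)
    if reply is None:
        print(port, "no answer: likely filtered at the network edge")
    elif reply.haslayer(ICMP):
        print(port, "ICMP unreachable: a router or firewall is filtering")
    elif reply.haslayer(TCP) and reply[TCP].flags.R:
        print(port, "RST received: the server appears directly reachable")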

Detecting a reverse proxy in front of the web server can be done by analysing the web server banner, which might directly disclose the existence of a reverse proxy (for example, if ‘WebSEAL’[1] is returned). It can also be determined from the answers the web server gives to requests, compared against the expected answers. For example, some reverse proxies act as “intrusion prevention systems” (or web-shields) by blocking known attacks targeted at the web server. If the web server is known to answer with a 404 message to a request targeting an unavailable page, but returns a different error message for some common web attacks like those done by CGI scanners, it might be an indication of a reverse proxy (or an application-level firewall) filtering the requests and returning a different error page than the one expected. As another example, if the web server returns a set of available HTTP methods (including TRACE) but requests using those methods return errors, there is probably something in between blocking them. And, in some cases, the protection system even gives itself away:

GET /web-console/ServerInfo.jsp%00 HTTP/1.0

HTTP/1.0 200
Pragma: no-cache
Cache-Control: no-cache
Content-Type: text/html
Content-Length: 83

Error
FW-1 at XXXXXX: Access denied.

Example of the security server of Check Point Firewall-1 NG AI “protecting” a web server
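
In the same spirit, a simple heuristic for the error-page comparison described above can be sketched as follows, assuming the Python requests library; the target URL and probe paths are purely illustrative:

import requests

base = "http://www.example.com"  # hypothetical target

baseline = requests.get(base + "/no-such-page-12345")  # random non-existent page
probe = requests.get(base + "/cgi-bin/phf")            # classic CGI scanner signature

print("baseline:", baseline.status_code, len(baseline.text))
print("probe:   ", probe.status_code, len(probe.text))
# A markedly different status code or error page for the probe can hint
# at a reverse proxy or application-level firewall filtering the request.
if probe.status_code != baseline.status_code:
    print("responses differ: something may be filtering requests")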

Reverse proxies can also be introduced as proxy-caches to accelerate the performance of back-end application servers. Detecting these proxies can be done, again, based on the server header, or by timing requests that should be cached by the server and comparing the time taken to serve the first request with that of subsequent requests.
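
A minimal timing sketch of this idea, again assuming the requests library and an illustrative URL, might look like this; the result is only a weak heuristic, since timing noise and application-level caching can produce similar effects:

import time
import requests

url = "http://www.example.com/some-page.html"  # hypothetical target

start = time.monotonic()
requests.get(url)
first = time.monotonic() - start

start = time.monotonic()
requests.get(url)
repeat = time.monotonic() - start

print(f"first request: {first:.3f}s, repeated request: {repeat:.3f}s")
# Headers such as Age, Via or X-Cache, which some caches add, are also worth checking.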

Other elements that can be detected are network load balancers. Typically, these systems will balance a given TCP/IP port across multiple servers based on different algorithms (round-robin, web server load, number of requests, etc.). Thus, the detection of these architecture elements needs to be done over multiple requests, comparing results in order to determine whether the requests are going to the same or to different web servers; for example, based on the Date: header if the server clocks are not synchronised. In some cases the network load balancer might inject new information in the headers that will make it stand out distinctively, like the AlteonP cookie introduced by Nortel’s Alteon WebSystems load balancer.
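
The clock-skew idea can be sketched as follows: compute the offset between each response's Date: header and the local clock, and look for more than one distinct offset across requests. This is a rough heuristic, only meaningful when the back-end clocks are not synchronised, and the URL is hypothetical:

from datetime import datetime, timezone
from email.utils import parsedate_to_datetime
import requests

url = "http://www.example.com/"  # hypothetical target

offsets = set()
for _ in range(5):
    r = requests.get(url)
    server_time = parsedate_to_datetime(r.headers["Date"])
    offsets.add(round((datetime.now(timezone.utc) - server_time).total_seconds()))
    for name in r.cookies.keys():  # balancer-injected cookies, e.g. AlteonP
        print("cookie set:", name)

print("distinct clock offsets seen:", offsets)  # more than one hints at several servers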

Application web servers are usually easy to detect. Sometimes this is because requests for several resources are handled by the application server itself rather than the web server, and the response header will vary significantly (including different or additional values in the answer header). Another way to detect them is to check whether the web server tries to set cookies which are indicative of an application web server being used (such as the JSESSIONID provided by some J2EE servers), or whether it rewrites URLs automatically to do session tracking.
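
A quick check along these lines might look for well-known default session cookie names; the mapping below lists common defaults and is by no means exhaustive:

import requests

KNOWN_COOKIES = {
    "JSESSIONID": "J2EE application server",
    "PHPSESSID": "PHP",
    "ASP.NET_SessionId": "ASP.NET",
    "CFID": "ColdFusion",
}

r = requests.get("http://www.example.com/")  # hypothetical target
for name in r.cookies.keys():
    if name in KNOWN_COOKIES:
        print(f"{name}: suggests {KNOWN_COOKIES[name]}")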

Authentication back-ends (such as LDAP directories, relational databases, or RADIUS servers), however, are not as easy to detect immediately from an external point of view, since they will be hidden by the application itself.

The use of a database back-end can be determined simply by navigating an application. If there is highly dynamic content generated “on the fly”, it is probably being extracted from some sort of database by the application itself. Sometimes even the way information is requested might give insight into the existence of a database back-end; for example, an online shopping application that uses numeric identifiers (‘id’) when browsing the different articles in the shop. However, when doing a blind application test, knowledge of the underlying database is usually only available when some vulnerability surfaces in the application, such as a SQL injection, which indicates that the application is actually talking to a database (the vulnerability would not be possible otherwise).
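
A classic first probe for such an ‘id’ parameter is to inject a single quote and look for database error strings in the response. The following minimal sketch assumes the requests library; the URL, parameter, and error signatures are all illustrative, and it should only be used against applications one is authorised to test:

import requests

r = requests.get("http://www.example.com/shop/item", params={"id": "1'"})
for signature in ("SQL syntax", "ODBC", "ORA-", "SQLException"):
    if signature in r.text:
        print("possible database error leaked:", signature)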

Known server vulnerabilities
Vulnerabilities found in the different elements that make up the application architecture, be it the web server itself or the database back-end, can severely compromise the application, in some cases even more severely than a vulnerability in the application itself. For example, consider a server vulnerability that allows a remote, unauthenticated user to upload files to the web server, or even to replace existing files. This vulnerability would compromise the application itself, since a rogue user would be able to replace the application or introduce code that would affect the back-end servers, as its application code would be run just like any other application.

Reviewing server vulnerabilities can be hard to do if the test needs to be done through a blind penetration test. In these cases, vulnerabilities need to be tested from a remote site, typically using an automated tool. However, testing for some vulnerabilities can have unpredictable results on the web server, and testing for some kinds of vulnerabilities (like those directly involved in denial of service attacks) might not be possible due to the service downtime involved if the test were successful.

Also, some automated tools will flag vulnerabilities based on the web server version retrieved. This leads to both false positives and false negatives: on the one hand, if the web server version has been removed or obscured by the local site administrator, the scan tool will not flag the server as vulnerable even if it is; on the other hand, if the vendor providing the software does not update the web server version when vulnerabilities are fixed, the scan tool will flag vulnerabilities that do not exist. The latter case is actually very common, as some operating system vendors backport patches of security vulnerabilities to the software they provide in the operating system but do not do a full upgrade to the latest software version; this happens, for example, in most GNU/Linux distributions such as Debian, Red Hat or SuSE.

In most cases, vulnerability scanning of an application architecture will only find vulnerabilities associated with the “exposed” elements of the architecture (such as the web server) and will usually be unable to find vulnerabilities associated with elements which are not directly exposed, such as the authentication back-ends, the database back-ends or even reverse proxies in use.
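
As a trivial illustration of the version-based approach and its limits, the banner such scanners rely on can be fetched as follows (URL hypothetical); as noted above, the banner may be removed, obscured, or describe a version whose flaws were fixed by backported patches:

import requests

r = requests.head("http://www.example.com/")  # hypothetical target
print("Server banner:", r.headers.get("Server", "(hidden or removed)"))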

Finally, not all software vendors disclose vulnerability information in a public way; in some cases, information on the vulnerabilities present in their different releases is not published in vulnerability databases[2] but is only disclosed to customers or published through fixes that do not have accompanying advisories. This reduces the usefulness of vulnerability scanning tools. Typically, vulnerability coverage of these tools will be very good for common products (such as the Apache web server, Microsoft’s Internet Information Server, or IBM’s Lotus Domino) but will be lacking for lesser-known products.

This is why reviewing vulnerabilities is best done when the tester is provided with internal information on the software used, including the versions and releases deployed and the patches applied. With this information in hand, the tester can retrieve the information from the vendor itself and analyse what vulnerabilities might be present in the architecture and how they can affect the application itself. When possible, these vulnerabilities can be tested in order to determine their real effects and to detect whether there might be any external elements (such as intrusion detection or prevention systems) that might reduce or negate the possibility of exploiting them. Testers might even determine, through a configuration review, that a vulnerability is not even present, since it affects a software component that is not in use.

It is also worth noting that vendors will sometimes silently fix vulnerabilities and make the fixes available in new software releases. Also, different vendors will have different release cycles that determine the support they might provide for older releases. A tester with detailed information on the software versions used by the architecture can analyse the risk associated with the use of old software releases that might be unsupported in the short term or are already unsupported. This is critical, since if a vulnerability were to surface in an old software version that is no longer under support, the systems personnel might not be directly aware of it: no patches will ever be made available for it, and advisories might not list that version as vulnerable (as it is unsupported). Even if they are aware that the vulnerability is present and the system is indeed vulnerable, they will need to do a full upgrade to a new software release, which might introduce significant downtime in the application architecture or might force the application to be recoded due to incompatibilities with the latest software version.

Administrative tools
Any web server infrastructure requires administrative tools to maintain and update the information used by the application: static content (web pages, graphic files), application source code, user authentication databases, etc. Depending on the site, technology, or software used, the administrative tools will differ. For example, some web servers are managed using administrative interfaces which are, themselves, web servers (such as the iPlanet web server), or are administered through plain text configuration files (in the Apache case[3]), or through operating-system GUI tools (when using Microsoft’s IIS server or ASP.NET). In most cases, however, the server configuration will be handled using different tools from those used for the maintenance of the files served by the web server, which are managed through FTP servers, WebDAV, network file systems (NFS, CIFS) or other mechanisms. Obviously, the operating systems of the elements that make up the application architecture will also be managed using other tools. Applications may also have administrative interfaces embedded in them that are used to manage the application data itself (users, content, etc.).

Review of the administrative interfaces used to manage the different parts of the architecture is very important, since if an attacker gains access to any of them, they can compromise or damage the application architecture. Thus it is important to:


 * list all the possible administrative interfaces (a minimal enumeration sketch follows this list)
 * determine if administrative interfaces are available only from an internal network or are also available from the Internet
 * if available from the Internet, determine the access control methods used to access these interfaces and whether they are susceptible to attacks
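
The enumeration sketch referenced in the first item above might probe a handful of common administrative paths; the candidate list is purely illustrative, and a real review would be driven by the products actually deployed:

import requests

CANDIDATE_PATHS = ["/admin/", "/manager/html", "/web-console/", "/phpmyadmin/", "/console/"]

base = "http://www.example.com"  # hypothetical target
for path in CANDIDATE_PATHS:
    r = requests.get(base + path, allow_redirects=False)
    if r.status_code not in (404, 410):  # anything else may be an exposed interface
        print(path, "->", r.status_code)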

Some sites do not directly manage their web server applications fully; they might have other companies manage the content provided by the web server application. These external companies might either provide only parts of the content (news updates or promotions) or might manage the web server completely, including content and code. It is common to find administrative interfaces available from the Internet in these situations, since using the Internet (as the web servers are directly connected to it anyway) is cheaper than providing a dedicated line connecting the external company to the application infrastructure through a management-only interface. In this situation it is very important to test whether the administrative interfaces can be vulnerable to attacks.

Authentication back-ends
Many applications rely heavily on the authentication methods implemented to provide information only to the authorised user and to no other user. In some cases, like in a merchant shop, the information might be of the same kind for every user (the history of items bought in the shop and the user profile) but should only be viewed by the legitimate user. In other cases, like an internal human resources application, different users will have different roles that determine what actions or functionality are available to them in the application.

It is important to review and test the security of the authentication back-end to ensure that the information it stores cannot be recovered by any means. This means ensuring that the authentication information is stored in encrypted or hashed form, especially the passwords, if any, used by users to access the application[5]. Of course, backups of the authentication system should also be kept encrypted to prevent disclosure of this sensitive information in the event of loss.
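
As a minimal sketch of what “not recoverable by any means” implies for passwords, the following stores a salted, slow hash (PBKDF2 from the Python standard library) rather than the password itself; the parameters are illustrative:

import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # per-user random salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison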

Other points to review include the application privileges granted to each user, the existence of default users, and whether admin-level and user-level access use the same authentication back-end.