Cover Your Assets, Web Style

To keep your company's Internet-facing data safe, you must determine how much protection is needed for each of your assets.

July 8, 2002

14 Min Read

Determine the necessary information, including:

• Who needs access to what services, applications and data.

• Where must the access come from: outside or inside?

• When does the access need to be available? Business hours? After hours?

• What information are you trying to protect? You need to know what you're trying to protect before you can put policies into place. This should be a detailed analysis. If you have multiple business partners and customers, you'll need to ensure customer separation so customer X can't access information confidential to customer Y.

There are plenty of models you can use to create these policies, including the Biba Integrity Model and the Clark-Wilson Integrity Model. Even if you don't subscribe to a particular methodology, however, be sure the policy is documented and accessible to the technologists in your organization who will need to reference it during implementation and deployment.

Once the policy is in place, schedule regular reviews of the policy and the tools that assist in securing your infrastructure. At a minimum, review your policies and your effectiveness in enforcing those policies once a quarter.

SSL To the Rescue?

Some companies mistakenly believe that SSL (Secure Sockets Layer) will mitigate all security risks. Although SSL is great for ensuring that no one eavesdrops on your business transactions, unless you're using client certificates as part of your SSL strategy, you aren't gaining much except a security blanket for your transactions while they're in transit. Client-side certificates provide a more credible authentication scheme than user-name and password combinations do. Requiring client certificates for access makes it more difficult for someone to impersonate an authorized user, providing a higher level of security by ensuring that you know who is attempting to access your data (for more on SSL connections, see Feature "The Anatomy of an SSL Handshake").

SSL-based communication can also inhibit other parts of your security strategy. Most IDS (intrusion-detection system) and virus-scanning services can't interact with SSL, so encrypted traffic is allowed to pass without question. What's needed is a slightly different architecture, one that allows encrypted traffic to be inspected by your edge-protection devices before it's passed further into your back-end infrastructure. Two physical connections are made for every logical connection: The client connects to the device, and a separate connection from the device to the server is made to fulfill any client requests.
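As a minimal sketch of requiring client certificates, here is how such an endpoint might be configured with Python's ssl module. The file paths are hypothetical, and the arguments are optional only so the sketch runs without real certificates; a production endpoint would always load them.

```python
import ssl

def make_server_context(cert_file=None, key_file=None, ca_file=None):
    """Build a TLS server context that refuses clients lacking a valid
    certificate. File arguments (hypothetical paths) are optional here
    only so the sketch runs without real certificates."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if cert_file:
        # The server's own certificate and private key.
        ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    # Handshakes fail unless the client presents a certificate signed
    # by a CA we trust -- this is what blocks impersonation.
    ctx.verify_mode = ssl.CERT_REQUIRED
    if ca_file:
        ctx.load_verify_locations(cafile=ca_file)
    return ctx
```

The key line is `verify_mode = ssl.CERT_REQUIRED`: without it, the server accepts any client that completes the handshake, which is exactly the "security blanket" situation described above.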

Such an architecture is often called a co-appliance or side-arm configuration. By using a traffic-management device--usually a load-balancer--you can provide a method by which you can follow those security policies you so painstakingly created (see graphic "Web Infrastructure in a Side-Arm Configuration").

In this configuration, SSL-encrypted traffic is decrypted by first directing the traffic to the SSL appliance so it can be checked by the IDS and virus-scanning services. Once it passes through this outer crust, it's passed back to the load-balancer/content switch and on to your servers.

Web Infrastructure in a Side-Arm Configuration

Another way to provide this functionality is to ensure that your traffic director can terminate SSL sessions. The device serving as your traffic director would be configured as a reverse proxy for SSL connections, handling all encryption and decryption services between itself and the client. This removes the need for an external SSL appliance but introduces an additional issue: If you use client certificates to authorize access, they will be authenticated by the traffic-management device, not your application servers, unless you're using a traffic manager capable of inserting the user credentials into the HTTP header, such as F5 Networks' Big-IP (see Sneak Preview "Big-IP 5000 Switch Marks New Territory for F5 Networks").
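Credential insertion of this sort can be sketched in Python. The dictionary shape matches what `ssl.SSLSocket.getpeercert()` returns, while the `X-Client-CN` header name is a placeholder of my own, not F5's actual header.

```python
def extract_cn(peer_cert):
    """Pull the Common Name from the nested-tuple 'subject' structure
    returned by ssl.SSLSocket.getpeercert()."""
    for rdn in peer_cert.get("subject", ()):
        for key, value in rdn:
            if key == "commonName":
                return value
    return None

def add_identity_header(headers, peer_cert):
    """Return a copy of the request headers with the authenticated
    identity stamped in for back-end servers to consume."""
    out = dict(headers)
    cn = extract_cn(peer_cert)
    if cn:
        out["X-Client-CN"] = cn  # placeholder header name, our own convention
    return out
```

Back-end applications can then trust that header only because the traffic manager is the sole device allowed to connect to them.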

Some industries need to continue SSL sessions all the way to the back-end Web servers. Financial and banking institutions, in particular, fall under stringent security regulations that require all transactions to be encrypted, regardless of whether they are internal or external. If this is a requirement for your business, you must ensure that your traffic director can terminate the SSL session with the client and initiate SSL connections with devices deeper in your infrastructure. This type of configuration has an added benefit if your traffic director can use client certificates, which provide protection by offering a more secure method of identification. This isn't a perfect solution--certificate stores are often kept on disk and could be stolen--but it's more secure than nothing. If you go with this option and require a client certificate to connect to your back-end servers, you can prevent direct connections from the outside to your servers.

• A client makes a request via SSL. The edge traffic manager terminates the session and lets IDS and virus-scanning devices examine the request.

• Edge traffic manager connects to back-end traffic manager (or directly to a Web or application server in less complex environments) via SSL using the traffic manager's--not the user's--client certificate for identification. The connection can be made without requiring a client certificate, but the added measure of security from requiring authentication can be a lifesaver.

• Request is serviced and sent back to the edge traffic manager via SSL.

• Edge traffic manager sends the response back to the client via the original SSL session.
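The four steps above can be modeled end to end. In this runnable sketch, "encryption" is a stand-in XOR so the flow works without certificates (real deployments use two TLS sessions, one per hop), and the IDS signature string is an invented example.

```python
def xor(data, key):
    """Stand-in for encryption so the flow runs without certificates."""
    return bytes(b ^ key for b in data)

CLIENT_KEY, BACKEND_KEY = 0x21, 0x42  # stand-ins for the two TLS sessions

def inspect(request):
    """Step 1: IDS/virus scanning sees the request in the clear at the edge."""
    if b"cmd.exe" in request:  # crude, invented Nimda-style signature
        raise ValueError("blocked by IDS")
    return request

def edge_handle(encrypted_request, backend):
    plaintext = xor(encrypted_request, CLIENT_KEY)  # terminate the client session
    plaintext = inspect(plaintext)                  # step 1: inspect decrypted traffic
    reply = backend(xor(plaintext, BACKEND_KEY))    # step 2: re-encrypt toward back end
    plaintext_reply = xor(reply, BACKEND_KEY)       # step 3: reply returns via SSL
    return xor(plaintext_reply, CLIENT_KEY)         # step 4: original client session
```

The point of the model is the ordering: the only place the request exists in the clear is at the edge, where the inspection devices sit.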

Managing client sessions at the edge makes it easier to secure your infrastructure: You are dealing with a known set of devices allowed to connect to your servers and other back-end devices.

Let's take a minute to discuss key storage. Never store your server's private keys on disk on the same server that uses those keys to encrypt your data. Instead, use smartcards or an HSM (hardware security module) for key storage and management. These solutions provide better security because access to the keys is managed via the smartcards, and stealing the keys requires physical access to the card reader. Rainbow Technologies, nCipher Corp. and Ingrian sell products for securely managing your keys. Don't simply use a removable storage device for these purposes, because most operating systems view these devices as mounted file systems or drives, so they are remotely accessible.

Load-Balancers and Connection Control

Just as you must ensure that your firewall is configured correctly and lets only traffic on specific ports pass into your infrastructure, it's equally important to configure back-end devices, where possible, to accept connections only from specified clients.

OPR or DSR Configuration

Controlling who and what can connect to your servers provides for tighter access control and, thus, better security. If you have a load-balancer that can change the source IP address (one that can fully terminate TCP/SSL), your back-end Web servers sitting behind that load-balancer should not accept Web requests from a device other than the load-balancer. In other words, use the TCP-based access control available or create a "firewall sandwich" that allows connections only from specified, trusted sources. Again, start with a deny-all policy and open the machine only as far as necessary to let it perform its tasks.
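The TCP-based access control described above can be sketched as a simple source check at accept time; the load-balancer address is hypothetical.

```python
import socket

# Hypothetical address of the load-balancer's internal interface.
ALLOWED_SOURCES = frozenset({"10.0.0.5"})

def accept_or_drop(conn, addr):
    """Deny-all by default: close any connection that did not come
    from a trusted traffic manager."""
    if addr[0] not in ALLOWED_SOURCES:
        conn.close()
        return False
    return True
```

In practice you would enforce the same rule in the host firewall as well, so the check holds even if the application misbehaves.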

This is particularly important for your databases because that's where you keep the crown jewels. If you aren't encrypting the data (and you should be), you need to take particular care that access is allowed only from those systems that need it.

Be wary of using OPR (out of path return) or DSR (direct server return) configurations on Web and application servers behind a load-balancer (see "OPR or DSR Configuration"). These configurations use a server load-balancer to make the connection from client to server but let the server send its reply directly back to the client, bypassing the load-balancer. Although such configurations can improve performance, they leave the servers vulnerable: The Web servers must have a direct route back to the client, which means the client has a fairly direct route to the server. That opens the server up to possible intrusion unless you implement additional firewall rules, which places a heavier burden on the firewall.

Application Protection Systems

Closely related to IDSs, APSs (application protection systems) detect and act on abnormal traffic patterns at the application layer (Layer 7). There are two major differences between IDSs and APSs. First, an APS is proactive. When an APS detects an abnormality or a malicious request, it can block the request from reaching your servers. Second, an APS inspects and acts on data streams as opposed to individual packets. An APS examines the payload of a transmission and determines the validity of each request as a whole. An APS should be in front of your Web server or load-balancing solution (see "APS Placement").

These systems, such as those provided by Kavado, Protegrity, Sanctum and Stratum8, work by building a set of policies from observed communication between clients and servers. Any request that deviates from that base set of policies is considered an attack, and the system can act on it according to configurable rules.
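A toy version of that learn-then-enforce model can be written in a few lines; the class and its interface are my own invention, not any vendor's API.

```python
class LearningAPS:
    """Record observed traffic during a training window, then flag
    anything outside that baseline as an attack."""

    def __init__(self):
        self.baseline = set()
        self.learning = True

    def observe(self, method, path):
        # During training, every observed request widens the baseline.
        if self.learning:
            self.baseline.add((method, path))

    def is_attack(self, method, path):
        # Once learning ends, deviation from the baseline is an attack.
        return (method, path) not in self.baseline
```

Real products build far richer policies (parameter types, lengths, cookies), but the deviation-from-baseline principle is the same.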

These systems can detect a variety of attacks, even before patches for the underlying exploits are available. The Nimda and Code Red worms, which propagated through abnormal requests, could have been blocked by an APS long before the patches were available.

You can create your own minimalist APS by using a content switch and letting only specific requests flow through to your servers. By mapping out the entry points into your applications, you can deny requests that do not match those entry points. Because form input is concatenated onto the URL when you use the get method, using the post method for submission of forms will make this process easier.

Let's assume you have a simple application with three customer-facing pages: login.php, enterorder.php and showorder.php. If you use the post method to submit data, you can specify these three pages in your content switch and allow only requests with these three specific URLs to reach your back-end servers. You don't want to use get because the rule set grows increasingly complex with a highly variable URL, such as those used on an e-commerce site. While the content switch can certainly determine the entry points when you use the get method, the post method is more secure and simpler because the entire form--including the session ID--is transmitted in the payload rather than in the URL. Yes, if you have a large number of pages you could end up with a huge rule set, but consider how much work would be involved if an attacker wiped out your data or compromised your server. The potential losses in time, reputation, fines and possibly lawsuits if sensitive customer information is released should justify the cost.
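A content-switch rule of this kind reduces to an allow-list check, sketched here with the three example pages above (the policy of refusing get entirely follows the article's preference for post).

```python
from urllib.parse import urlsplit

# The three customer-facing entry points from the example above.
ENTRY_POINTS = {"/login.php", "/enterorder.php", "/showorder.php"}

def allow_request(method, url):
    """Content-switch-style rule: post only, and only to known entry
    points. get is refused so form data and session IDs never ride
    in the URL."""
    if method.upper() != "POST":
        return False
    return urlsplit(url).path in ENTRY_POINTS
```

Everything not explicitly allowed is denied, which mirrors the deny-all stance recommended throughout.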

You may also consider using an application proxy firewall. Instead of connecting directly to a service, the connection is made to the proxy, which then connects to the desired service and returns the data to the client. This method can stop network attacks and a large percentage of application attacks, such as buffer overflows and protocol violations, because it controls the connection. Such an implementation can also provide audit logs and authentication, offering more control over access to your environment (for an example of a Web application firewall and information on how these products can protect you, see "AppShield Inspects and Protects Your Web Apps From HTTP to Z").

Technology editor Lori MacVittie has been a software developer and a network administrator. Most recently, she was a member of the technical architecture team for a global transportation and logistics organization. Send your comments on this article to her at [email protected].

Apply security patches. Keeping exact server replicas, including content, will let you test patches to ensure they don't hurt your environment. (This also affords you a handy backup in case of hardware failure.) This may seem elementary, but many corporations have yet to apply the latest security patches for servers providing outward-facing Web-based services. Keeping on top of security patches may be a full-time job -- if so, dedicate a resource to the task. Many of the worms and exploits that swept the Web months ago are still active; Nimda and Code Red, despite the wide publicity, continue to circulate across the Internet. If you haven't applied the patches, do so now. Right now. We'll wait for you.

Sun Microsystems patches

Microsoft TechNet Security

Linux distribution patches

Now check your firewall as well. Many popular firewalls are software implementations deployed on an operating system with known exploits and vulnerabilities. Don't forget to double check this first line of defense.

The next thing you need to do is turn off extraneous services that may be running on your servers. Plenty of tools, such as Nmap and Saint, can provide you with a list of the services that are accessible on your servers, so get one and run it against the servers that make up your Web infrastructure.

Shut down the services that aren't absolutely necessary. Are you storing customer information in a database? If so, you need to encrypt the data. All of it would be best, but if that's impossible at least encrypt sensitive data, such as credit card and account numbers and private customer information. Doing so will ensure that if the unthinkable happens and an intruder is able to access your customer data, it will be useless. You can use software such as Application Security's DbEncrypt (www.appsecinc.com/products/dbencrypt/) or an appliance, such as Ingrian's i140 (for a review, see "When the Front Line Is Breached, Ingrian i140 Puts Up a Good Fight"), to encrypt specific fields within your database. Or you can write your own method of encryption -- anything is better than clear text. Certainly, the more complex the method, the better, but a little protection is still better than no protection.
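Field-level protection can be sketched with nothing but the standard library. This example substitutes keyed hashing plus last-four storage for reversible encryption: a tokenization-style stand-in, not what DbEncrypt or the i140 actually do, but it keeps raw card numbers out of the table. The key is a placeholder and, per the key-storage advice above, should live in an HSM, never in source code.

```python
import hashlib
import hmac

KEY = b"placeholder-key-from-your-hsm"  # never hard-code a real key

def protect_card(number):
    """Store an HMAC (usable for equality lookups) and the last four
    digits (for display); the raw number never reaches the table."""
    digits = number.replace(" ", "").replace("-", "")
    return {
        "card_hmac": hmac.new(KEY, digits.encode(), hashlib.sha256).hexdigest(),
        "card_last4": digits[-4:],
    }
```

Because the HMAC is deterministic, the application can still look a card up by value without ever storing it in clear text.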

How are your firewalls configured? You'd be surprised at the number of misconfigured firewalls that allow traffic you don't want to reach your servers. Allow traffic to flow from the firewall to your back-end servers only on the ports that you specify. If access is available only via Port 80, then only traffic on Port 80 should be allowed, and only to that specific server. Start with a "deny all" attitude, then open up only what is necessary.
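That deny-all stance reduces to a rule table in which everything not explicitly allowed is rejected; the addresses here are hypothetical.

```python
# Explicit allow rules; everything else is denied by default.
ALLOW_RULES = {
    ("10.0.1.10", 80),   # hypothetical Web server, HTTP only
    ("10.0.1.11", 443),  # hypothetical SSL endpoint
}

def permitted(dest_ip, dest_port):
    """A connection passes only if it matches an explicit allow rule."""
    return (dest_ip, dest_port) in ALLOW_RULES
```

Auditing a firewall then becomes a matter of checking that its rule set is no larger than this minimal table.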

While the following is by no means an exhaustive list of attacks that could occur, these are some of the most common means by which your infrastructure can be exploited.

• Cookie Poisoning: Cookies can be a dangerous way of storing sensitive information. Because cookies are simply text-based files, attackers can visit a site and modify the cookies to gain access to your systems.

• Database Sabotage: Even though your database may be secured, it still needs to allow access by application and/or Web servers, and those servers are likely to have read and write access. While you can save time by using input field names in your application that match the database, it's a bad idea from a security standpoint. Attackers can take that information and craft SQL statements within text fields or HTTP requests to perform all sorts of nasty deeds.

• Stealth Commanding: This attack has been around for years and continues to harvest a cornucopia of data for those of malicious intent. By appending or inserting commands into text fields, an attacker can force the Web or application server to execute commands that are often destructive or that reveal sensitive information. This attack can be alleviated by careful handling of user input inside the application.

• Direct Requests: One of the methods used to garner sensitive information from your site is to directly request it, in effect creating an end run around the checks you may have in your application path. For example, you may have an entry point into an application that lets customers view the status of their orders or their profiles. You generally expect these functions to be accessed from a link on another page, so your verification of customer identity may not be as strong on the order status or profile page. It's easy enough to directly request these pages without going through the expected pages and score a hit, thereby retrieving sensitive customer information.
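The database-sabotage attack described above is easy to reproduce, and parameterized queries are the standard defense. This sketch uses Python's sqlite3 module with an invented table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

def find_user_unsafe(name):
    # Vulnerable: attacker input becomes part of the SQL statement.
    return conn.execute(
        "SELECT name FROM users WHERE name = '%s'" % name).fetchall()

def find_user_safe(name):
    # Parameterized: the driver treats input as data, never as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()
```

The same crafted string that dumps a row through the unsafe query is treated as a harmless literal by the parameterized one.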

The No. 1 security rule when developing Web-based applications: Never trust user input. Always check and, if necessary, double check any input received. Don't rely on JavaScript or VBScript checks in the browser intended to force compliance, because this technique is easily avoided by direct request to your servers. For an in-depth look at attack strategies, see our previously published workshop, "Maintaining Secure Web Applications."
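Server-side checks of the sort this rule demands can be as simple as strict allow-list patterns that run regardless of what the browser did; the field names and patterns here are invented examples.

```python
import re

# Server-side counterparts to browser-side checks: these run even when
# an attacker bypasses JavaScript and posts directly to the server.
ORDER_ID = re.compile(r"^[0-9]{1,10}$")
USERNAME = re.compile(r"^[A-Za-z0-9_]{3,32}$")

RULES = {"order_id": ORDER_ID, "username": USERNAME}

def validate(field, value):
    """Reject any value that does not match the field's allow-list
    pattern; unknown fields are rejected outright."""
    rule = RULES.get(field)
    return bool(rule and rule.fullmatch(value))
```

Allow-listing what input may look like, rather than block-listing known-bad strings, neutralizes stealth commanding and SQL tricks alike.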
