Making Layer 7 Work for You
Content networking at Layer 7 is an integral part of a well-designed infrastructure. Learn how to avoid latency problems and make the best use of a Layer 7 device.
February 24, 2003
Layer 4 load balancers also spread requests across multiple Web servers, but they route traffic based on IP addresses and ports rather than on higher-level application information, such as URLs. With Layer 4 devices, you have to replicate all Web content and services on every machine in the server farm.
Traffic Patterns
Layer 7 routing may be intelligent and efficient, but those smarts incur latency. A slight pause, caused by delayed binding, occurs while the load balancer, XML switch or other content-aware device inspects traffic and decides where to route it. Say a load balancer receives a request for a specific Web page: It completes the TCP handshake with the client, reads enough of the request to determine which Web server should receive it, and only then opens a TCP connection to that server and "binds" the client's connection to it.
These steps add a few milliseconds to response time, which may or may not be noticeable to the client. The good news is that most Layer 7 devices keep this latency to a minimum by routing traffic based only on a specific set of headers and the URI. Some Layer 7 devices, however, such as F5 Networks' Big-IP, incur additional latency because they route traffic based on more specific information in the TCP payload, such as additional HTTP headers or data from an HTML form. The advantage is that these devices have more data to consult when deciding which server to use, so their routing decisions can be smarter (see "Major Changes for Big-IP").
A Layer 4 load balancer, meanwhile, doesn't incur this delay because it uses a much simpler decision-making process: It binds a TCP connection to a server immediately after it receives a SYN message from the client machine.
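To make delayed binding concrete, here's a minimal Python sketch of the Layer 7 side of the process. The server names, URI prefixes and routing table are invented for illustration; a real device does this in optimized network code, not per-request Python.

```python
import socket

# Hypothetical routing table: the server names and URI prefixes are
# illustrative, not taken from any particular product.
ROUTES = {
    "/images/": ("img1.example.com", 80),
    "/cart/":   ("app1.example.com", 80),
}
DEFAULT_BACKEND = ("web1.example.com", 80)

def delayed_bind(client_sock):
    """Read the client's request, inspect the URI, then connect.

    The TCP handshake with the client has already completed by the
    time this runs; the bind to the chosen server is delayed until
    the request line has been read and parsed. That read-and-parse
    step is where the Layer 7 latency comes from.
    """
    request = client_sock.recv(4096)          # request line plus headers
    uri = request.split(b" ", 2)[1].decode()  # e.g. "GET /cart/add HTTP/1.1"
    backend = DEFAULT_BACKEND
    for prefix, server in ROUTES.items():
        if uri.startswith(prefix):
            backend = server
            break
    server_sock = socket.create_connection(backend)  # the delayed bind
    server_sock.sendall(request)   # replay the buffered request bytes
    return server_sock
```

A Layer 4 device skips everything between the accept and the connect: it never reads the request before choosing a server.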
À La Mode

You need to determine how and where a Layer 7 content networking device will fit into your network infrastructure. That entails choosing both the "mode" in which the content networking device is deployed--proxy or transparent--and the network topology.
A proxy is an intermediary between two or more devices. When a content networking device is configured in proxy mode, all requests to a Web site or service go directly to it, and the device determines how to distribute them. When that same device is in transparent mode, it sits and listens, intercepting only requests for the specific applications it's been configured to handle.
Proxy mode provides a single point of entry into your Web infrastructure, and it centralizes security and consolidates network logging. It has performance advantages over transparent mode in that it can keep open multiple TCP sessions to the servers. That way there's no latency from a second TCP handshake between the proxy device and each individual server in the farm.
[Graphic: Armed & Ready]
Most load balancers and XML switches offer a proxy option. NetScaler's Request Switch 9000 Series devices, however, multiplex both HTTP and TCP in proxy mode, so they can process requests for content or services using HTTP 1.1 over existing TCP connections. That spreads HTTP requests across a shared set of server-side connections.

Content networking devices usually have to be in proxy mode to process SSL sessions on the Web. A load balancer either decrypts the data itself or has a third-party product do it, so it can examine the traffic and make a routing decision. It then has to re-encrypt its response to the client's request with SSL. Some devices can do this in transparent mode, but that means more latency.
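SSL aside, the connection-reuse idea behind both proxy mode's persistent server sessions and NetScaler's multiplexing can be sketched in a few lines. The pool class below is invented for illustration; a real device would add locking, health checks and idle-connection aging.

```python
import socket
from collections import defaultdict

class BackendPool:
    """Keep-alive connection pool: reuse server-side TCP connections
    across client requests, the idea behind HTTP 1.1 multiplexing.
    Simplified sketch with no locking or connection limits.
    """
    def __init__(self):
        self.idle = defaultdict(list)   # (host, port) -> idle sockets

    def get(self, backend):
        # Reuse an idle connection if one exists; otherwise pay the
        # cost of a fresh TCP handshake to the server.
        if self.idle[backend]:
            return self.idle[backend].pop()
        return socket.create_connection(backend)

    def put(self, backend, sock):
        # Return the connection for the next client request instead of
        # closing it -- this is what avoids the second handshake.
        self.idle[backend].append(sock)
```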
In transparent mode, the load balancer or other content networking device operates like a reverse Web cache, where a router redirects requests destined for a specific port (usually Port 80) or a specific port/IP address combination to the caching device. This is a less intrusive configuration than proxy mode because it requires little change to the network infrastructure.
The main difference between proxy and transparent mode is that in proxy mode the content networking device terminates the session, whereas in transparent mode the Web server terminates it. In both cases, the content networking device remains responsible for determining which Web server should fulfill the client request (see graphic "To Proxy or Not To Proxy").

The topology of your network dictates where the content networking device physically sits. There are three server-farm topologies: inline, one-arm and side-arm.
When a content networking device is deployed in an inline network topology, it sits between the router and the network switch that's physically connected to the server farm. The downside of this configuration is that all traffic must return via the Layer 7 device regardless of whether the device needs to see the traffic on the egress route. If the device can't handle high throughput, performance will suffer.
Deploying and configuring an inline topology with a load balancer in proxy mode is simple. But high-availability Web environments with this topology need an additional load balancer to support failover and to avoid a single point of failure (see "Sharing the Load," page 71). The "Staying Inline" diagram illustrates this type of setup.

In one-arm and side-arm topologies, the Layer 7 device hangs off the network switch rather than being sandwiched between the router and the switch (see "Armed & Ready," page 69). The main difference between the two topologies is the number of interfaces between the content networking device and the switch: One-arm topology uses a single interface; side-arm uses two.
So which topology do you use when? It depends on the amount of traffic passing through the switch. If the switch has heavy traffic, side-arm is best; otherwise, one-arm will suffice.
[Graphic: Staying Inline]
A word of caution: When your content networking device is configured in proxy mode in a one-arm or a side-arm topology, you must reconfigure the default gateway on the servers to point to the device's IP address. If the device is configured in transparent mode in this case, though, reconfiguration likely won't be necessary because the device automatically intercepts traffic destined for the specified ports and/or IP addresses.
Be More Direct
A key component of a Web services configuration is setting up how your Web servers get content to the client machine. There are two ways to configure this in a load balancer. The most common is to have the device itself deliver the Web content to the client. The other, called direct-return configuration, has the Web server ship the content to the client rather than having the load balancer or other content-aware device handle the task.

Different vendors use different terms for this direct-return configuration. Nortel calls it Direct Server Return (DSR); F5 Networks calls it nPath; Foundry Networks, SwitchBack; and Radware, Out of Path. But don't let the terms throw you: They all describe the same thing--the server sending Web content directly to the client. In our Real-World Labs®, we use the term DSR when we review a content-aware device because it best describes the direct-return configuration.
DSR makes sense in cases where outbound traffic in the Web infrastructure is significantly heavier than inbound traffic. Eliminating that extra stop at the load balancer for outbound traffic can increase the throughput and response time of your Web infrastructure, a big plus when you're load balancing heavy content.
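To see why skipping that stop helps, here's a schematic of the two reply paths. It sketches the common DSR technique--Layer 2 forwarding with the virtual IP configured on each server's loopback interface--but the function names and hop lists are purely illustrative, not any single vendor's implementation.

```python
# Schematic comparison of reply paths; hops only, names illustrative.

def proxied_return():
    """Default configuration: replies flow back through the device,
    which rewrites them so they appear to come from the virtual IP."""
    return ["client -> balancer", "balancer -> server",
            "server -> balancer", "balancer -> client"]

def direct_server_return():
    """DSR: the balancer forwards the request at Layer 2, rewriting
    only the destination MAC; the destination IP stays the virtual IP.
    Each server carries the virtual IP on a loopback interface, so it
    can answer with that address as the source and reply straight to
    the client, skipping the balancer on the way out."""
    return ["client -> balancer", "balancer -> server",
            "server -> client"]
```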
[Graphic: To Proxy or Not To Proxy]
When outbound traffic isn't heavy, DSR may be unnecessary. There are about 10 outbound packets for every inbound packet in a typical Web infrastructure, so if you're handling 100 inbound packets per second, that's only 1,000 outbound packets per second. That's not enough to tax the load balancer or other content networking device, so DSR wouldn't be the best fit here.
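A rough sizing calculation makes the cutoff concrete. The 10-to-1 packet ratio comes from the discussion above; the average outbound packet size is an assumed figure for illustration.

```python
# Back-of-the-envelope check on whether the balancer needs DSR.
# The 10:1 outbound:inbound ratio is from the text; the 1,000-byte
# average outbound packet is an illustrative assumption.
OUT_PER_IN = 10

def outbound_load(inbound_pps, avg_out_bytes=1000):
    out_pps = inbound_pps * OUT_PER_IN
    mbps = out_pps * avg_out_bytes * 8 / 1_000_000
    return out_pps, mbps

print(outbound_load(100))     # (1000, 8.0)      -- easily handled in-path
print(outbound_load(50000))   # (500000, 4000.0) -- a case for DSR
```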
One-arm and side-arm topologies are often used in conjunction with DSR so servers in the farm can bypass the content networking device and send responses directly to the client. That way, you don't have to reconfigure the servers' default gateways when you install a load balancer.
Upon Closer Inspection

Content networking will continue to be a major part of a successful Web infrastructure design, especially with the emergence of Web services and XML for e-business. Load balancers and other Layer 7 content networking devices, such as those from F5 Networks, are already digging even deeper into application traffic, giving you more control over your Web traffic and making it easier to develop automated e-business and e-commerce applications. So it's important to know the ins and outs of designing your Web infrastructure with these devices.
Technology editor Lori MacVittie has been a software developer, a network administrator and a member of the technical architecture team for a global transportation and logistics organization. Write to her at [email protected].
Sharing the Load

You can't talk load balancing without considering failover scenarios. Because a load balancer is often the single point of entry into your Web infrastructure, you need a backup load balancer to pick up the slack when your primary device fails. Beware, though, that adding a second device means deciding on yet another configuration.
The two main configurations for load-balancing failover are active-active and active-standby. In an active-active failover configuration, two load balancers can service requests for the same IP address. That means your Web infrastructure can serve more clients simultaneously, and latency is reduced in the failover process.
In an active-standby configuration, one load balancer is responsible for serving requests, and a secondary device takes over when the first one fails. When the standby load balancer steps in after a failure, it assumes the primary device's IP and MAC addresses and handles all requests until the primary device is restored. The trade-off with this approach is that the failover process can take several seconds, during which time any incoming requests for Web content could get rejected.

When a failover does occur, the sessions are handled in one of two ways: stateful or stateless. Stateful failover is more reliable because when a load balancer fails, no TCP or Web sessions are lost: The session table is synchronized to the secondary device (in the active-standby model) or between both devices (in the active-active model). Stateful failover can require more hardware, though, because the mirroring work decreases the number of concurrent sessions each device can handle.
Stateless failover, meanwhile, doesn't preserve existing sessions during the transition from a failed load balancer to its backup; clients have to re-establish their connections with the backup device. So should you go stateful or stateless? That depends on how important in-progress TCP and Web sessions are to your business.
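The mechanics behind stateful failover can be sketched as a session table that mirrors every new entry to its peer. Everything here--the class, the message format, the UDP transport--is invented for illustration; real devices use proprietary synchronization protocols.

```python
import json
import socket

class SessionTable:
    """Sketch of session mirroring for stateful failover.

    Each time the active load balancer binds a client flow to a
    server, it pushes the entry to the standby unit, so a failover
    loses no sessions. The extra message per session is the capacity
    cost mentioned above.
    """

    def __init__(self, peer_addr=None):
        self.sessions = {}           # flow id -> chosen backend server
        self.peer_addr = peer_addr   # (host, port) of the peer unit
        self.sync = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def bind(self, flow_id, backend):
        self.sessions[flow_id] = backend
        if self.peer_addr:           # mirror the entry to the standby
            msg = json.dumps({"flow": flow_id, "backend": backend})
            self.sync.sendto(msg.encode(), self.peer_addr)

    def receive_mirror(self, msg_bytes):
        # Called on the standby when a sync message arrives.
        entry = json.loads(msg_bytes)
        self.sessions[entry["flow"]] = entry["backend"]
```

In the active-active model, both devices would run this mirroring in each direction so either one can take over the other's sessions.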