A GSLB Reality Check

How do you keep your business running even after an outage or attack? One approach is global server load-balancing. But beware of some trade-offs.

March 10, 2005


Hosting a site at a single data center leaves you exposed to exactly that kind of outage or attack. Multiple data centers lessen that risk. A secondary data center can serve as a backup or hot standby site, or you can use it to share the load of client requests. If one data center fails, you can redirect clients to another that works. One way to do this is to modify your authoritative DNS server so that it monitors the health of each data center. There are freely available scripts for BIND and other types of DNS servers, but GSLB products from Cisco Systems, F5 Networks, Foundry Networks, NetScaler, Nortel Networks, Radware and others provide the same benefit. Such GSLB devices act as authoritative name servers.
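Those BIND scripts can be surprisingly short. Here's a minimal sketch of the idea in Python--a hedged illustration, not production code: the zone file path, name server names and polling interval are all assumptions, and the health check is just a TCP connect.

import socket
import subprocess
import time

SITES = ["1.1.1.1", "2.2.2.2"]  # Los Angeles and London, per our example
ZONE_FILE = "/var/named/db.nwc.com"  # hypothetical zone file location

def is_healthy(ip, port=80, timeout=3):
    # Crude health check: can we complete a TCP handshake with the site?
    try:
        socket.create_connection((ip, port), timeout=timeout).close()
        return True
    except OSError:
        return False

def rewrite_zone(healthy_ips):
    # Publish A records only for healthy sites, with a short TTL, and bump the serial.
    serial = int(time.time())  # timestamp as serial number: always increases
    with open(ZONE_FILE, "w") as f:
        f.write("$TTL 30\n")
        f.write(f"@ IN SOA ns1.nwc.com. admin.nwc.com. ({serial} 3600 600 86400 30)\n")
        f.write("@ IN NS ns1.nwc.com.\n")
        for ip in healthy_ips:
            f.write(f"www IN A {ip}\n")
    subprocess.run(["rndc", "reload", "nwc.com"], check=True)  # tell BIND to reload the zone

while True:
    healthy = [ip for ip in SITES if is_healthy(ip)]
    if healthy:  # never publish an empty answer set
        rewrite_zone(healthy)
    time.sleep(30)  # polling interval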

Say www.nwc.com is hosted at data centers in Los Angeles and London, with IP addresses 1.1.1.1 and 2.2.2.2, respectively. A GSLB device monitors the health and load of each data center using a simple ping test, periodic HTTP GET requests, or a more advanced interrogation based on SNMP or a proprietary protocol between the GSLB device and the equipment at the data center. When a client attempts to resolve the FQDN (Fully Qualified Domain Name) www.nwc.com, that request eventually arrives at the GSLB device. (For an overview of DNS resolution in GSLB, see "Step by Step," at far right.)
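The HTTP GET variant of such a health check takes only a few lines. Below is a sketch using Python's standard library; the path, port and status-code policy are illustrative assumptions, not any vendor's defaults.

import http.client

def site_is_up(ip, host="www.nwc.com", path="/", timeout=5):
    # The site is considered up if it answers the GET with a 2xx or 3xx status.
    try:
        conn = http.client.HTTPConnection(ip, 80, timeout=timeout)
        conn.request("GET", path, headers={"Host": host})  # Host header for virtual hosting
        status = conn.getresponse().status
        conn.close()
        return 200 <= status < 400
    except OSError:
        return False

# A GSLB device runs checks like this against every data center, then answers
# DNS queries for www.nwc.com accordingly:
answer = "1.1.1.1" if site_is_up("1.1.1.1") else "2.2.2.2"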

The GSLB device decides whether to direct the client to 1.1.1.1 or to 2.2.2.2. If it determines that the London data center has failed, it will direct all clients to the Los Angeles IP address until London is back up.

The Catch to Caching

DNS servers and GSLB devices can limit the life of an answer using the TTL (time to live) response parameter, which GSLB device vendors usually recommend setting to a low (or zero) value. Unfortunately, most browsers and some popular proxy servers ignore the TTL value, caching DNS answers for 15 minutes to six hours. This browser-based DNS caching hurts business continuity. If your GSLB device performs health checks of your data centers and finds one of them is down, it directs new clients to the good data center, and the browsers cache the DNS reply that corresponds to that center. Problems arise when many clients have already been directed to a particular data center and are still using it at the time it fails (see "When Disaster Strikes," right).

When Disaster Strikes

In this scenario, the browser cache gets cleared only in the unlikely event of a user performing certain operations that depend on the browser type and the way it was launched. For example, with most browsers, the DNS cache is cleared only if the user closes all browser windows--even those not associated with your site--or the client system is rebooted. Obviously, a commercial Web site can't rely on users to do this.
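To see why this matters, consider a simple model of the two caching behaviors. This sketch is illustrative--the fixed 30-minute pin is a made-up value within the range browsers actually use--but it shows why a short TTL can't save you once the browser has the answer.

import time

class TTLCache:
    # Honors the TTL: a 30-second TTL means at most about 30 seconds of stale answers.
    def __init__(self):
        self.entries = {}  # name -> (ip, expiry timestamp)

    def put(self, name, ip, ttl):
        self.entries[name] = (ip, time.time() + ttl)

    def get(self, name):
        ip, expiry = self.entries.get(name, (None, 0))
        return ip if time.time() < expiry else None  # expired entries force a fresh lookup

class BrowserStyleCache(TTLCache):
    # Ignores the server's TTL and pins the answer for a fixed 30 minutes.
    def put(self, name, ip, ttl):
        super().put(name, ip, 1800)  # the TTL in the DNS answer is simply discarded

With the TTL-honoring cache, a zero or 30-second TTL pulls clients back to the GSLB device almost immediately after a failure. With the browser-style cache, every client that resolved the name before the failure keeps hammering the dead IP address until the pin expires.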

The best solution to the browser-caching problem is to use a DNS feature for returning multiple DNS answers--which are also called "A" records--in response to a query. For example, a DNS server, or a GSLB device acting as one, could return both IP address 1.1.1.1 and IP address 2.2.2.2 in response to a query for www.nwc.com. All modern browsers use this DNS feature, and all DNS servers and most GSLB devices support it.

If the authoritative name server returns the A records 1.1.1.1 and 2.2.2.2, and the browser initially connects to the IP address 1.1.1.1 and that site then fails, the browser will silently connect to the IP address 2.2.2.2. The amount of time it takes to connect to the second data center varies between browsers, but typically is nine seconds to one minute. This "silent connection" is not instantaneous because the browser must first make a reasonable attempt to reconnect to the original data center. But such delay isn't unreasonable, considering it's a recovery from catastrophic failure.

The use of multiple A records poses other problems, however. For one thing, it effectively breaks most other GSLB functions. Most GSLB devices have a feature that's intended to work like this: An "ordered" or "preferential" list of A records is returned so the client in our example gets directed to IP address 1.1.1.1, with a backup address of 2.2.2.2. But in reality, ordered lists of A records don't work on the Internet. The vast majority of deployed client DNS servers shuffle the A records, outside the control of the GSLB device. So though the use of multiple A records helps with business continuity, it breaks such features as site persistence and link-based, proximity-based and weighted load balancing.
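This failover behavior is easy to reproduce outside a browser. The following sketch, using only Python's standard library, resolves all A records for a name and tries each address in turn with a timeout--roughly what browsers do--and shuffles the list, which mirrors what most client DNS servers do to the record order anyway.

import random
import socket

def connect_any(hostname, port=80, timeout=9):
    # Gather every A record for the name.
    infos = socket.getaddrinfo(hostname, port, socket.AF_INET, socket.SOCK_STREAM)
    addrs = list({info[4][0] for info in infos})
    random.shuffle(addrs)  # client DNS servers typically shuffle the order anyway
    for ip in addrs:
        try:
            # First data center that completes the handshake wins.
            return socket.create_connection((ip, port), timeout=timeout)
        except OSError:
            continue  # that data center is down or unreachable; try the next
    raise ConnectionError(f"no data center for {hostname} is reachable")

sock = connect_any("www.nwc.com")  # fails over silently if one site is down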

There are other concerns to consider in returning multiple A records. Some proxy servers that perform DNS queries on behalf of clients will round-robin TCP connections among all IP addresses returned, as long as those IP addresses are reachable. For example, a given client accessing www.nwc.com may alternate between the Los Angeles and London data centers as the various pages on the site are navigated. If the Network Computing site is architected so that it doesn't matter which data center a client accesses, this won't be a problem. And this doesn't affect business continuity, because such proxy servers stop directing clients to a down site. However, if shopping cart or sign-on information, for example, is stored only at the data center where the client initially connected--or if the site uses SSL--alternating between data centers will cause undesirable behavior, like a lost shopping cart. The user will then have to log on and start the session all over again.

Cisco, Foundry and NetScaler recommend the use of a BGP (Border Gateway Protocol) failover mechanism instead of the multiple A record approach (for more on this alternative, see ID# 1605rd1). This setup is complex, and in the event of a failure, it most commonly leaves users pointed at the failed data center for several minutes while the routes converge. BGP failover may be a better choice if you can't live with the side effects of multiple A records but are OK with a few minutes' downtime after a failure.

Close to You

Aside from business continuity, another common reason for multiple data centers is proximity--you place a data center nearer to the clients who need to access it so the network-topological distance between clients and sites will be minimized, and performance maximized.

By definition, DNS-based GSLB products must measure the topological proximity between each data center and the client's DNS server, not the actual client. Unfortunately, client DNS servers typically aren't topologically (or geographically, for that matter) close to their respective clients, and the path that a TCP connection takes between client and data center differs from the path the DNS resolution takes. There is plenty of research on this (see "Sites to See," at left). One popular ISP, for example, collocates all its DNS servers for all its clients in Atlanta.

Any DNS proximity calculation, meanwhile, adds some latency to each new client request. So the question is not so much whether DNS-based proximity works, but whether the gains are worth the added complexity to your application. That depends largely on where your data centers are and where the majority of your client base resides. And don't forget that proximity detection doesn't work in conjunction with multiple A records. There are also non-DNS methods and products for proximity-based GSLB, such as the Nortel Alteon Content Director.
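A common building block for DNS-based proximity is an active probe from each data center back to the client's DNS server--the "topological footrace" described in the sidebar. Here's a hedged sketch of one way to approximate that measurement; timing a TCP handshake to port 53 is an illustrative stand-in for whatever proprietary probe a real GSLB product uses.

import socket
import time

def rtt_to(dns_server_ip, port=53, timeout=2):
    # Time a TCP handshake to the client's DNS server as a rough proximity probe.
    # Note: this measures distance to the DNS server, not to the actual client.
    start = time.perf_counter()
    try:
        socket.create_connection((dns_server_ip, port), timeout=timeout).close()
        return time.perf_counter() - start
    except OSError:
        return float("inf")  # unreachable from this site: never the closest

# In a real deployment the probe runs at each data center and reports back to the
# GSLB device. These measurements are made-up values for illustration:
rtts = {"1.1.1.1": 0.012, "2.2.2.2": 0.145}  # site VIP -> RTT to the client's DNS server
best_site = min(rtts, key=rtts.get)  # Los Angeles wins for this Atlanta-hosted DNS server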

Choosing the best mix of methods and products for GSLB is application-specific, and there are trade-offs. Using multiple A records is the only way to redirect users transparently to a working data center (or at least a descriptive error page) after a failure, but this approach also breaks other things. And DNS-based GSLB proximity algorithms work in theory, but depending on your data center locations and user base, they may not improve--and can even hurt--performance. So it pays to do your homework before making a large investment in GSLB.

Pete Tenereillo is one of the original developers of the first commercial server load balancer and the first firewall appliance. He is an inventor of record on 10 U.S. patents, and has 20 years of software-engineering and product-management experience, including 10 years in the network security and Layer 4-7 industries. He is currently an independent consultant. Write to him at [email protected].

DNS Resolution in GSLB

1. The client makes a request to the assigned local DNS server--the client's ISP DNS server farm in Atlanta, in our example. The client must receive an answer or an error.

2. The client's DNS server performs an "iterative" resolution on behalf of the client. It queries the root name servers and ends up at the authoritative name server for www.nwc.com. In this case, the GSLB device is that authoritative name server.

3. The GSLB device communicates with software or devices at each site. It gathers such data as site health, number of connections and response time.

Step By Step

4. The software or device at each site measures dynamic performance. That could include an RTT (round-trip time), topological footrace or BGP (Border Gateway Protocol) hop count back to the client's DNS server.

5. The GSLB device determines the preferred site. It returns the answer to the client's DNS server--IP address 1.1.1.1 or 2.2.2.2. If the TTL (time to live) is not set to zero, the answer is cached at the client's DNS server so other clients sharing the server can use the previous calculation (without having to repeat Steps 2 through 4).

6. The DNS answer is returned to the client. The client makes a TCP connection to that site.
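Steps 5 and 6 amount to an authoritative name server that computes an answer per query. Here's a minimal sketch of such a responder, assuming the third-party dnspython library; the health-and-proximity logic is reduced to a stub, and binding port 53 requires appropriate privileges.

import socket
import dns.message  # third-party: dnspython
import dns.rrset

def preferred_ip():
    # Stub for Steps 3 through 5: health checks and proximity math decide this.
    return "1.1.1.1"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 53))  # we are the authoritative name server for www.nwc.com
while True:
    data, requester = sock.recvfrom(512)  # the query comes from the client's DNS server, not the client
    query = dns.message.from_wire(data)
    response = dns.message.make_response(query)
    # Step 5: return the chosen A record with a short TTL so it isn't cached for long.
    response.answer.append(dns.rrset.from_text("www.nwc.com.", 30, "IN", "A", preferred_ip()))
    sock.sendto(response.to_wire(), requester)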

BGP Failover

If you have multiple data centers, you'd probably like to control client-request distribution between them and to redirect users to the other data center when one fails. At the least, it would be nice to direct users to a Web page that says something along the lines of "please be patient, our site will be back up in a few minutes."

The behavior of browsers, proxy servers and caching name servers makes this complex, leaving you in a Catch-22 situation: You can return multiple A records, but that means losing control over traffic distribution. Or you can return single A records and lose high availability--in the event of a data center failure, many users will have no access to either data center.

You can use BGP (Border Gateway Protocol) in your routers to help mitigate this, allowing the return of single A records combined with a good level of high availability. But beware that few sites have implemented this complex configuration, so if you choose this path you'll be on the bleeding edge. It's also costly, both in terms of initial investment and required IP space.

After some significant searching and discussions with vendors (and unreturned calls from others), I found only one example of existing documentation explaining how to configure products in this manner--from Cisco Systems. And that configuration is not even integrated into a global server load-balancing device. It requires that the GSLB devices and routers be individually configured at each data center, and it assumes that when one router can't communicate with another in the remote data center, the remote data center has failed.

NetScaler has integrated specific BGP failover capabilities in the latest version of its 9000 Series, which is both an SLB and a GSLB, complete with server and site health-checking capabilities. Although NetScaler's product documentation doesn't include a description of how to configure the product to perform DNS-based GSLB and BGP failover simultaneously, NetScaler showed me written instructions on which commands to use.

Cisco, Foundry Networks and NetScaler are among the vendors that offer GSLB products based on HRI (host route injection) using BGP or other routing protocols. They use HRI for both traffic control--directing a client request to the topologically closest data center, for instance--and failover. HRI works by advertising the same IP host or network from multiple locations. This method is used in lieu of DNS-based GSLB for each Web site or service, and it's best suited to corporate intranet applications, where you can maintain tight control of routing. It's not recommended for Internet-facing Web sites.

A Closer Look

Here are two examples of BGP-based data center failover. Figure 1 shows a simple active-standby configuration, where the intent is for all traffic to go to the Los Angeles data center unless that data center fails:

1) The Los Angeles data center is connected to the Internet through ISP A. Either a router or a BGP-enabled SLB device advertises the network 1.1.0.0/20. Either a router or GSLB device at the London data center is monitoring the availability of the Los Angeles data center, advertising either no BGP route to the network 1.1.0.0/20, or a route with a very high metric.

2) All clients have a single cached A record of IP address 1.1.1.1 for the FQDN www.nwc.com, and therefore all current users are connected to the Los Angeles data center.

When disaster strikes in Figure 2, and either the Los Angeles data center or ISP A becomes unavailable:

1) The router or SLB device at the London data center begins advertising the network 1.1.0.0/20 (or, if it was already advertising a route with a high metric, that route now becomes the best path).

2) After the routes converge, users migrate to the London data center.
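Open-source tooling can express London's side of this logic. Below is a hedged sketch of a health-check helper in the style used by ExaBGP, which reads announce and withdraw commands from a script's standard output; the prefix, probe target and interval are illustrative, and a production configuration involves considerably more.

import socket
import sys
import time

PREFIX = "1.1.0.0/20"  # the Los Angeles network, advertised from London only on failure

def la_is_up(timeout=3):
    # Stand-in for monitoring the Los Angeles data center from London.
    try:
        socket.create_connection(("1.1.1.1", 80), timeout=timeout).close()
        return True
    except OSError:
        return False

advertised = False
while True:
    up = la_is_up()
    if not up and not advertised:
        # Disaster strikes: London starts advertising the Los Angeles network.
        sys.stdout.write(f"announce route {PREFIX} next-hop self\n")
        sys.stdout.flush()
        advertised = True
    elif up and advertised:
        # Los Angeles is back: withdraw so traffic returns after convergence.
        sys.stdout.write(f"withdraw route {PREFIX} next-hop self\n")
        sys.stdout.flush()
        advertised = False
    time.sleep(10)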

Here's an example of how you could use GSLB with multiple active data centers, single A records and BGP failover:

1) The router or SLB device at the Los Angeles data center advertises the network 1.1.0.0/20 through ISP A. The Los Angeles data center is configured with a backup VIP of IP 2.2.2.2, so that it's capable of simultaneously servicing connections to IP addresses 1.1.1.1 and 2.2.2.2. The network 2.2.0.0/20 is not advertised through ISP A at this time, or is advertised with a very high metric.

2) The router or SLB device at the London data center advertises the network 2.2.0.0/20 through ISP B. The London data center is configured with a backup VIP of IP 1.1.1.1. The network 1.1.0.0/20, however, is not advertised through ISP B at this time, or is advertised with a very high metric.

The failure scenario here is the same as in the active-standby example. If the router or SLB device at the London data center detects an outage, it advertises both networks 1.1.0.0/20 and 2.2.0.0/20. After convergence, North American clients are directed to the London data center.

To BGP or Not to BGP

So how do you know if BGP failover makes sense for your organization? First of all, the ISPs involved must be willing to work with you, and to advertise networks that belong to another ISP. Second, the advertised networks must be large enough that the ISPs will accept the route advertisements. For example, an active-active multisite implementation would be best configured using two AS (autonomous system) numbers, with networks of at least /20. Host route advertisements won't suffice.

And this approach doesn't provide failover for specific host addresses in the IP range, so it's not well-suited to partial data center failures. Say www.nwc.com resolves to host IP 1.1.1.1, but www.xyz.com is also hosted out of the same data center and resolves to host IP 1.1.1.2. If only the portion of the site associated with www.nwc.com has failed, BGP failover alone wouldn't work because the BGP route advertisement would have to be withdrawn for the entire /20, directing traffic for both www.nwc.com and www.xyz.com away from that data center.

Most important, though, the convergence time must meet your uptime requirements. If both data centers are connected to the same ISP, convergence can happen in seconds, but it usually takes minutes.

And unlike the multiple A record approach, if convergence times are not nearly instantaneous, the client TCP connection attempts will time out, resulting in an unfriendly error message to the user. That may seem harmless, but not if you host a stock-trading site. Imagine the extended cost and number of support calls that could result from 2,000 concurrent users receiving error messages and being unable to connect for a few minutes in the middle of a trading day. In our London-Los Angeles example, users force a new connection from their browsers after the convergence period and get redirected to the London data center. But by that time, the damage is done.

BGP failover can be used to provide a good level of high availability, and it works in conjunction with DNS-based GSLB traffic-control features. But BGP failover adds significant cost and complexity. Unless your ISP can provide you with near-instant BGP failover, your users still cannot be "silently" redirected to a good data center. If money is no object, and you need traffic control features, BGP failover with GSLB is the way to go. Or you can just return multiple A records and live with round-robin load sharing.
