The Cold, Green Facts

Buying energy-efficient technology isn't the only--or even the best--way to cut down on energy consumption in the data center. Rethinking the way you use the technology you already have can deliver far bigger savings.

September 1, 2007

Data centers draw a lot of power. That newsy tidbit was one of the conclusions of the Environmental Protection Agency's data center report to Congress, released Aug. 2. While not surprising, the EPA's gross numbers are nonetheless staggering. Data centers used 61 billion kilowatt-hours in 2006, or 1.5% of all power consumed in the United States. The cost: $4.5 billion, or about as much as was spent by 5.8 million average households. The federal government alone sucked up about 10% of that power.

What's truly telling is the EPA's forecast of our consumption under different scenarios. If we do nothing, data center power usage will double by 2011. If we all implement what the EPA considers the state of the art, we could lower overall data center power usage to 2001 levels by 2011--a net swing of 90 billion kilowatt-hours.
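To put those scenarios in perspective, here's a quick back-of-the-envelope check using only the figures quoted above; treat it as an illustration of the arithmetic, not the EPA's own model.

```python
# Rough arithmetic behind the EPA scenarios cited above. The 2006 baseline is
# from the report; everything else follows from the quoted claims.

baseline_2006_bkwh = 61.0                        # billion kWh consumed in 2006
do_nothing_2011 = baseline_2006_bkwh * 2         # "power usage will double by 2011"
state_of_the_art_2011 = do_nothing_2011 - 90     # the 90-billion-kWh swing

print(f"Do nothing, 2011:       ~{do_nothing_2011:.0f} billion kWh")
print(f"State of the art, 2011: ~{state_of_the_art_2011:.0f} billion kWh (roughly 2001 levels)")
```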

Although the EPA's recommendations were long on generalities (consolidate those servers, etc.) and short on specifics, what's good about them is that they suggest operational reforms, not just adoption of energy-efficient technology. As in our personal lives, we tend to look for technical solutions to operational problems. Sure, switching your quart-a-day Ben & Jerry's habit for a quart a day of Weight Watchers ice cream might have some health benefit, but nowhere near as much as a little portion control and exercise. So it is with the data center: Buying energy-efficient technology is a fine idea, but you end up much further ahead by rethinking how you use the technology you already have.

On the downside, the EPA doesn't recommend any actions by Congress. There's an executive order requiring federal agencies to reduce power consumption by a few percentage points each year, but the EPA made no recommendation for financial or tax incentives for private industry. Instead, it recommends recognizing organizations that do well, sort of like the gold stars your first-grade teacher used. Still, federal recognition of the problem is a positive step; the EPA is working on new Energy Star certifications for a broader range of equipment, including servers and "related product categories," and since the government is a huge customer, manufacturers have a compelling reason to design for that standard.

WHY GO GREEN?

History shows that corporate America doesn't take on initiatives that don't contribute to the bottom line, so the lack of government financial incentives means green had better stand on its own as a business practice. There has to be value, at the very least a little feel-good marketing, and a solid total-cost-of-ownership story is certainly preferred. For most green technologies, putting a real financial value on the expected benefit isn't too hard, but actually measuring the benefit across a data center--that's a different story.

The challenge starts with the electric bill. Most data centers are just a room in a dual-use building. Separating out data center power usage often isn't possible without retrofitting the room with sensors or installing a separate power meter. As a result, as an InformationWeek survey of 472 business technology pros confirms, almost no one in the IT organization is compensated based on saving energy, and only 22% of IT shops are responsible for managing power consumption. So while buying energy-efficient designs can reduce a system's TCO, in most organizations IT doesn't see the financial benefit.

Therefore, Job 1 is to get management to recognize and reward IT's efforts to save energy. Once that's done, a TCO calculation that includes power consumption is a lot more useful to IT. Beware, however, the double-edged sword. Most organizations that charge back for utility usage simply divide the total utility bill by the square feet occupied. When properly burdened, IT could easily see its electric bill go up by an order of magnitude.
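To see how sharply the chargeback method changes the picture, here's a minimal sketch with hypothetical numbers; the building bill, square footage, metered usage, and electric rate below are assumptions for illustration only.

```python
# Hypothetical chargeback comparison: allocating by floor space vs. metering
# the data center directly. All figures are illustrative assumptions.

building_bill = 50_000.0    # monthly electric bill for the whole building ($)
building_sqft = 100_000
datacenter_sqft = 2_000     # the data center is a small room...
datacenter_kwh = 180_000    # ...but draws a disproportionate share of the load
rate_per_kwh = 0.12

by_square_feet = building_bill * (datacenter_sqft / building_sqft)
by_metered_use = datacenter_kwh * rate_per_kwh

print(f"IT's bill, allocated by floor space: ${by_square_feet:,.0f}/month")
print(f"IT's bill, properly metered:         ${by_metered_use:,.0f}/month")
# With numbers like these, proper metering raises IT's apparent bill roughly
# 20-fold--the order-of-magnitude jump warned about above.
```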

Even if you can't enlighten management to the benefits of saving power, there are plenty of reasons for IT organizations to think green. That's because doing so addresses other IT pain points. At a recent conference hosted by Hewlett-Packard, two competing statistics emerged: HP maintains that more than half of existing data centers will become obsolete in a few years, yet over the same period, it says some three-quarters will underutilize their floor space. The apparent conflict means that floor space is the wrong metric for assessing data center capability. As the density of IT systems (as measured by power consumption per rack unit of space) increases, power and sometimes cooling capacity are exhausted far more quickly than space, at least as data centers are now configured.

Our survey reinforces this point, as about half of respondents say they'll be remodeling their data centers or building new ones in the next two years. And as they remodel, they'll find their every assumption about data center design challenged. Here, we'll focus on the remodeling problem, because the greenest data center is the data center that's never built. Sure, the Googles and Amazon.coms of the world can build state-of-the-art facilities next to dammed rivers or geothermal vents, but for the rest of us, the environmentally responsible thing to do is to squeeze every last ounce of potential out of the data centers we have.

SERVER CONSOLIDATION

If the greenest data center is the one you don't build, then the greenest server is the one you never turn on. Server consolidation is the first step in maximizing your data center's potential. Not only will it save on power and space, but it can also offer the means for maintaining critical systems that require an out-of-date operating system on up-to-date hardware.

One dual-socket quad-core server loaded with lots of memory can replace 30 or more older, lightly loaded single-processor systems. The power savings just from unplugging the servers will be in the range of 12 to 15 kilowatts, which for Californians means a cool $15,000 per year off the electric bill, and New York's ConEd customers can figure it at better than $18,000. With the server costing about $10,000 and the virtualization software from VMware costing about the same (considerably less if you choose Citrix's XenSource), the investment pays for itself in a year.
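Here's a minimal payback sketch for the consolidation example above. The per-server wattage and the electric rates are assumptions chosen to land near the figures quoted, so treat the output as illustrative.

```python
# Consolidation payback sketch. Wattage and $/kWh rates are assumptions.

old_servers = 30
watts_per_old_server = 450          # lightly loaded single-processor box (assumed)
hours_per_year = 24 * 365

kw_unplugged = old_servers * watts_per_old_server / 1000    # ~13.5 kW, in the 12-15 kW range
kwh_saved = kw_unplugged * hours_per_year

investment = 10_000 + 10_000        # new server plus VMware licensing, per the article

for region, rate in [("California", 0.13), ("New York (ConEd)", 0.16)]:
    annual_savings = kwh_saved * rate
    print(f"{region}: ~${annual_savings:,.0f}/yr saved, "
          f"payback in about {investment / annual_savings:.1f} years")
```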

Of course, the savings in terms of IT resources--managing one server rather than 30--are even more profound. We're not implying that managing 30 virtual servers is trivial; on the contrary, their virtual nature should be enough to force most organizations to automate server management tasks such as patch deployment. IDC finds that while server expenditures are increasing relatively slowly and may even flatten out with the multicore and virtualization phenomena, the cost of managing all those servers is growing nearly as fast as the cost of powering them. Since management started out as a larger cost, lowering it provides the best direct benefit to IT's bottom line. For many organizations, a well-thought-out server consolidation plan, along with such steps as automating patch management, can pay for itself in less than a year, even with staff retraining.

STORAGE ... MANAGEMENT?

Many organizations (and, until recently, vendors) dismiss the notion of managing the power consumption of storage systems as either impractical or inconsequential. Both notions are wrong. Particularly in data-intensive industries, the power, cooling, and floor space consumed by storage systems easily rival what servers use. Further, storage capacity as a whole is projected to grow 50% annually as far as the eye can see. So savings on storage system power and cooling are anything but inconsequential.

Similar to the server challenge, storage efficiency comes through better management and consolidation. Unfortunately, storage management remains an oxymoron for most enterprises. Sun Microsystems estimates that only 30% of enterprise storage systems are used effectively--a pretty alarming statistic considering it comes from a storage vendor. Implementing a storage resource management system, and actually using it, is about the only way to recover that dead 70% of storage, and the benefit is unquestionable (see Savings Through Storage Management).
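As a rough illustration of what that dead capacity costs to keep spinning, here's a sketch that reads Sun's figure as effective capacity utilization; the raw capacity, watts per terabyte, and electric rate are assumptions.

```python
# What idle or misallocated storage costs to power. All inputs are assumptions
# except the 30% effective-utilization figure quoted above.

raw_tb = 500                     # raw capacity across all arrays (assumed)
effective_use = 0.30             # Sun's estimate
watts_per_tb = 12                # spinning disk plus controllers (assumed)
rate_per_kwh = 0.13

wasted_tb = raw_tb * (1 - effective_use)
wasted_kwh_per_year = wasted_tb * watts_per_tb * 24 * 365 / 1000
annual_cost = wasted_kwh_per_year * rate_per_kwh

print(f"Poorly used capacity: {wasted_tb:.0f} TB")
print(f"Power to carry it:    {wasted_kwh_per_year:,.0f} kWh/yr (~${annual_cost:,.0f})")
```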

CHANGING BEST PRACTICES

The good news is that for most organizations, the pressure to remodel or build new data centers can be alleviated through improved server and storage hygiene. But even as you get more out of existing data centers, new challenges threaten long-held best practices. As certain racks become more densely populated with 1U servers and blade systems, using perforated floor tiles on a raised floor no longer supplies enough cold air for the systems in the rack. For facilities built in the last decade, typical raised-floor cooling systems can exhaust 7 kilowatts per rack. Even today, most data centers won't use that much power per rack, but in certain instances, they can use far more. For example, a fully loaded rack of blade servers can draw 30 kilowatts or more--only specialized, localized cooling systems can handle that sort of per-rack load.
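A quick way to see where a given rack falls relative to that ceiling is to add up what's in it, as in the sketch below; the device counts and wattages are illustrative assumptions.

```python
# Check a rack's power draw against the ~7 kW raised-floor cooling ceiling
# cited above. Device counts and wattages are illustrative assumptions.

RAISED_FLOOR_LIMIT_KW = 7.0

def rack_draw_kw(devices):
    """Sum the assumed draw (in watts) of everything in a rack, in kW."""
    return sum(devices.values()) / 1000

mixed_rack = {"1U servers x10": 10 * 350, "switches": 600, "storage shelf": 900}
blade_rack = {"blade chassis x4": 4 * 7500}     # a fully loaded blade rack

for name, rack in [("mixed rack", mixed_rack), ("blade rack", blade_rack)]:
    kw = rack_draw_kw(rack)
    verdict = ("fine on raised-floor cooling" if kw <= RAISED_FLOOR_LIMIT_KW
               else "needs supplemental rack- or row-based cooling")
    print(f"{name}: {kw:.1f} kW -> {verdict}")
```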

In the past, the advice was to spread out the load: Put blade servers and other high-powered gear in with lower-consumption storage and networking systems, or simply leave the racks partially empty. While that's still good advice for those who can pull it off, increasingly the geometry of the data center doesn't allow it. Once there's enough high-density gear, spreading it around pushes the average power draw of every rack beyond what most data centers can deliver. The answer then is to pull those high-demand systems back together and use rack-based or row-based cooling systems to augment the room-based air conditioning.
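To make that geometry argument concrete, here's a toy comparison; the rack counts and wattages are assumptions, not figures from the article.

```python
# Why spreading high-density gear around can backfire. All inputs are
# illustrative assumptions.

ROOM_LIMIT_KW = 7.0              # per-rack ceiling for room-based cooling
racks = 20
base_load_per_rack_kw = 4.0      # existing servers, storage, and network gear
blade_chassis = 10
kw_per_chassis = 7.5

# Option 1: spread the blade chassis evenly across every rack.
spread_per_rack = base_load_per_rack_kw + (blade_chassis * kw_per_chassis) / racks
over_or_under = "over" if spread_per_rack > ROOM_LIMIT_KW else "under"
print(f"Spread out: every rack draws {spread_per_rack:.2f} kW ({over_or_under} the room limit)")

# Option 2: concentrate the chassis in a few racks with dedicated cooling.
dense_racks = 3
dense_per_rack = blade_chassis * kw_per_chassis / dense_racks
print(f"Concentrated: {dense_racks} racks at {dense_per_rack:.1f} kW each get local cooling; "
      f"the other {racks - dense_racks} stay at {base_load_per_rack_kw:.1f} kW")
```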

Rack-based cooling systems are available from a number of vendors. Two with very different approaches are IBM and HP. IBM's eServer Rear Door Heat eXchanger replaces the back door of a standard IBM rack. The door uses a building's chilled water supply to remove up to 55% of the heat generated by the racked systems. The benefit of this approach is its simplicity and price, which is as low as $4,300. The system, introduced two years ago, removes heat before it enters the data center. By lowering the thermal footprint of the racked equipment, the IBM system can move the high-water mark from 7 kilowatts per rack to about 15 kilowatts, a nice gain for the price. The only downside is that the IBM solution requires water pressure of 60 PSI. Not all building systems can supply that much pressure, particularly if there will be a lot of these racks deployed.
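The arithmetic behind that jump from 7 to 15 kilowatts is straightforward; here's a minimal sketch using only the figures quoted above.

```python
# Why removing 55% of the heat at the rear door roughly doubles the per-rack
# ceiling: the room now sees only the 45% the door doesn't capture.

room_cooling_limit_kw = 7.0      # what raised-floor cooling can absorb per rack
door_removal_fraction = 0.55     # heat captured by the rear-door heat exchanger

max_rack_kw = room_cooling_limit_kw / (1 - door_removal_fraction)
print(f"Effective per-rack ceiling: {max_rack_kw:.1f} kW")   # ~15.6 kW, close to IBM's ~15 kW
```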

HP's solution is more comprehensive, takes more floor space, and costs considerably more. Introduced last year, its Modular Cooling System also uses the existing chilled water supply but adds self-enclosed fans and pumps. The result is a self-contained unit that can remove 30 kilowatts of heat with no impact on the room-based cooling system. Taking your hottest-running, most power-hungry systems and segregating them into a rack that removes 100% of their generated heat goes a long way toward extending the life of a data center. The racks cost $30,000 apiece, but if they mean not having to build a new data center, they're worth it.

If you already own the racks and simply want a method for extracting large amounts of heat, Liebert makes systems that mount on or above racks. The company says that its XD systems remove up to 30 kilowatts per rack.

Finally, row-based systems such as American Power Conversion's InfraStruXure and Liebert's XDH use half-rack-width heat exchangers between racks of equipment. The heat exchangers pull exhaust from the back, or hot-aisle side, of the racks and blow conditioned air out the front. Because these systems substantially limit the ability of hot exhaust air to mix with cooled air--with APC's product, you can put a roof and doors on the hot aisle for full containment--they can be much more efficient than typical computer room air conditioning, or CRAC, units. Where CRAC units can draw as much as 60% of the power required by the systems they're meant to cool, APC says its system can draw as little as 40%.
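Those two percentages translate directly into cooling overhead per kilowatt of IT load; here's a minimal comparison, with the 500 kW IT load assumed for illustration.

```python
# Cooling overhead: room CRAC units (up to 60% of IT load) vs. contained
# hot-aisle row cooling (as little as 40%). The IT load is an assumption.

it_load_kw = 500

for approach, overhead in [("room CRAC units", 0.60), ("contained row cooling", 0.40)]:
    cooling_kw = it_load_kw * overhead
    total_kw = it_load_kw + cooling_kw
    print(f"{approach}: {cooling_kw:.0f} kW of cooling, {total_kw:.0f} kW total "
          f"({total_kw / it_load_kw:.2f} kW delivered per kW of IT load)")
```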

Any of these systems will go a long way toward extending the life of a data center. However, if the limiting factor is the capacity of the cooling towers on the building's roof--that is, the ability of the building's existing systems to produce chilled water--then deploying these rack and row solutions is practical only if you shut off some of your existing CRAC units. The good news is that, quite often, you can do just that.

Overcapacity in CRAC units is easy to determine. If you need to put on a sweater, or perhaps a parka, to go into your data center, you have more room-based cooling than you need. With proper planning, the ambient temperature of the data center can be as high as 78 degrees, says HP's Paul Perez, VP for scalable data center infrastructure. Most data centers run at ambient temperatures well below 70 degrees. Perez says that for each degree of increased ambient temperature, figure at least a few percentage points in reduced energy consumption for cooling systems.
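Taking Perez's rule of thumb at face value, here's a rough sketch of what raising the setpoint might be worth; the cooling load and the 3%-per-degree figure are assumptions (a midpoint reading of "a few percentage points").

```python
# Estimated cooling savings from raising the ambient setpoint. The cooling
# load and percent-per-degree figure are assumptions, not measured values.

cooling_load_kw = 200
current_setpoint_f = 68
target_setpoint_f = 78            # HP's suggested ceiling with proper planning
savings_per_degree = 0.03         # assumed reading of "a few percentage points"

degrees_raised = target_setpoint_f - current_setpoint_f
fraction_saved = min(degrees_raised * savings_per_degree, 1.0)
print(f"Raising the setpoint {degrees_raised} degrees F could trim roughly "
      f"{cooling_load_kw * fraction_saved:.0f} kW of cooling load (illustrative only)")
```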

PROFESSIONAL HELP

While determining that you've got too much capacity is easy, doing something about it isn't. Most CRAC units are simple: Either they're on or they're off; there's no throttling them down. Less than 10% of CRAC units installed today contain variable-speed motors, and even with the right motors it's not trivial to determine the effect of changing the output of one unit. Various vendors have long had the instrumentation and software to map airflows and temperature gradients throughout a data center, both in 2-D and 3-D, but until recently no one could determine the "zone of influence" of each CRAC unit. In late July, HP announced Thermal Zone Mapping, which uses software and instrumentation to measure existing conditions and predict the effects of moving or throttling back CRAC units.

Along with its thermal zone mapping, HP also announced what it calls Dynamic Smart Cooling. DSC was developed with Liebert and STULZ, the two companies that produce the vast majority of room- and building-based cooling units in North America. The partnership lets HP software control the performance of newer CRAC units from the two manufacturers. For data centers built in the last five years or so, CRAC units may only require the addition of a controller board to interface with the HP system, provided those systems are equipped with variable-speed motors. Older CRAC units must be replaced to participate. HP claims DSC will save up to 45% on cooling costs. To achieve those sorts of savings requires more than just deploying a control system. The placement of CRAC units and computer racks will likely have to be rethought as well.

Once you start imagining moving the furniture around, it's time to call in the pros. No IT staff has the time or expertise to lay out a data center for maximum efficiency. Even if you understand the concepts of laminar airflow (which is good) vs. turbulent airflow (bad), you won't have the tools and software to measure what's going on in your data center. And, of course, when it's time to actually rearrange the facility, you'll need enough plumbers, electricians, and IT pros to get the job done in whatever timeframe you have.

What's impressive about DSC is that it yields a forward-looking data center design. We can all imagine the notion of a virtualized data center where servers are turned on and off automatically based on business needs. DSC provides the ability to sense the changing cooling needs of such a dynamic data center and make adjustments on the fly. Currently, this is just vision--no data center is this dynamic--but given that you get the chance to redesign data centers perhaps once a decade, it's good to have a vision.

Regardless of who creates the new design, two main requirements are instrumentation and modularity. Instrumentation provides the data necessary to understand data center power consumption, while modularity gives you the means to do something about it. Modularity also permits systems to run at their peak efficiency, something that almost never happens in current data center designs (see Where Does The Power Go?).

As businesses become more conscious of energy use, the solution isn't to throw the latest technology at the problem. What's required is a disciplined, well-thought-out approach that uses less power, less staff time, and less capital--and the will to make efficiency a priority.

Continue to the sidebars: Where Does The Power Go?, Savings Through Storage Management, and Moore's Law And You.

Reader Poll: Green Data Centers
