The Heat Is On
The 1U and the Blade Server are the IT department's Weapons of Mass Dissipation
May 27, 2004
Server technology may be blazing a trail into the uncharted territories of speed and performance, but how you keep your infrastructure cool is the real burning issue.
Two of the modern IT department's biggest Weapons of Mass Dissipation (WMDs) are the 1U and the blade server. These ultra-dense, ultra-compact devices, developed over the last five years, take up far less space than their traditional rack-mounted predecessors. But they also drive heat density up sharply. While vendors claim their systems are designed to run efficiently in fully loaded racks, they rarely take into account the impact of those systems on the rest of the data room.
The problem is exacerbated by the fact that there are no real standards for designing data center cooling systems that can handle 10 to 12 kilowatts per rack. And 12 kW is an incredible amount of generated heat: about the same output as two domestic electric ovens on full blast! Cooling that kind of load takes some special measures, and if you don't take them, you could be in serious trouble.
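If you want to see where a number like that comes from, the back-of-the-envelope sum is simple enough. Here it is as a small Python sketch; the server counts and per-server wattages are my own illustrative assumptions, not anyone's spec sheet:

```python
# Back-of-the-envelope rack heat density.
# Server counts and wattages below are illustrative assumptions.

def rack_kw(servers: int, watts_per_server: float) -> float:
    """Total heat load in kilowatts; at steady state, power in = heat out."""
    return servers * watts_per_server / 1000.0

# A full 42U rack of 1U servers drawing ~280 W each:
print(f"1U rack:    {rack_kw(42, 280):.1f} kW")       # 11.8 kW

# Blades pack tighter still: say 6 chassis of 14 blades at ~130 W:
print(f"Blade rack: {rack_kw(6 * 14, 130):.1f} kW")   # 10.9 kW
```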
Both Advanced Micro Devices (NYSE: AMD) and Intel Corp. (Nasdaq: INTC) have launched lower-power processors, although so far neither has incorporated them into blade servers, for the simple reason that lower power means lower performance.
However, Intel plans to add power management technology to its Itanium and Xeon processors so that users can set thresholds at which the processors are cycled on and off, thus reducing power. But show me an IT manager who is so satisfied with system performance that he is happy to reduce it, and I will show you the ash tray I have on my Lambretta.

So how do you avoid a blowout? Well, when temperatures around the servers exceed 75 degrees Fahrenheit, which can easily happen in a loaded rack, heat-related problems set in: outright failures, or a shortened life expectancy for components. To make matters worse, many high-end blade servers are specifically designed to reduce their clock speeds as temperatures rise, in an attempt to self-regulate, a bit like us starting to perspire when we run (so I'm told!). But often, well-meaning IT departments interpret this as a lack of performance, pop another server in the rack, and... well, add fuel to the fire.
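That self-regulation trick is, at heart, a feedback loop: past each temperature threshold, the clock steps down and trades speed for heat. Here's a minimal Python sketch of the idea; the thresholds and clock steps are invented for illustration and don't describe any real processor:

```python
# Illustrative thermal throttling loop.
# Thresholds and clock steps are invented for this sketch,
# not taken from any real processor.

CLOCK_STEPS_GHZ = [3.0, 2.4, 1.8, 1.2]  # progressively slower, cooler states

def pick_clock(temp_f: float) -> float:
    """Step the clock down as temperature rises past each threshold."""
    if temp_f < 75:      # comfortable: full speed
        return CLOCK_STEPS_GHZ[0]
    elif temp_f < 85:    # warm: first step down
        return CLOCK_STEPS_GHZ[1]
    elif temp_f < 95:    # hot: second step down
        return CLOCK_STEPS_GHZ[2]
    else:                # critical: slowest state
        return CLOCK_STEPS_GHZ[3]

for t in (70, 80, 90, 100):
    print(f"{t} F -> {pick_clock(t)} GHz")
```

And there's the trap: to a monitoring screen, a throttled server simply looks slow, so the tempting cure is another server in the same hot rack.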
Cooling is a serious business. Each kilowatt of heat generated requires about 140 cubic feet of cool air passing through the rack every minute to deliver acceptable cooling. Many racks in data centers today require over a thousand cubic feet of cool air every minute to keep their mojo intact!
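That rule of thumb turns into arithmetic very quickly. A quick Python sketch using the 140-CFM-per-kilowatt figure above (the rack loads are assumed for illustration):

```python
# Airflow needed per rack, using the column's rule of thumb.
CFM_PER_KW = 140  # cubic feet per minute of cool air per kilowatt of heat

def required_cfm(rack_kw: float) -> float:
    return rack_kw * CFM_PER_KW

for load in (4, 8, 12):  # assumed rack loads in kW
    print(f"{load:>2} kW rack -> {required_cfm(load):,.0f} CFM")
# 12 kW rack -> 1,680 CFM: well past the "thousand cubic feet a minute" mark
```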
Of course, it's not as simple as just buying a very large fan. Other elements are critical to maintaining the balance. Air moving too fast under the data floor creates a "Venturi effect": fast-moving air has lower static pressure, so instead of pushing up through the perforated tiles where it's needed, it can actually suck air away from them. Too many cables around the servers and under the floor also impede airflow.
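For the physics-minded, Bernoulli's principle puts numbers on that effect: the faster the air moves, the less static pressure it has left to push up through the tiles. A rough sketch, with an assumed (and merely plausible) underfloor plenum pressure:

```python
# Why fast underfloor air starves the tiles: Bernoulli in one line.
# Dynamic pressure q = 0.5 * rho * v^2 eats into the static pressure
# available to push air up through perforated floor tiles.

RHO_AIR = 1.2          # kg/m^3 at room conditions
PLENUM_STATIC_PA = 30  # assumed underfloor static pressure, in pascals

def usable_static_pa(velocity_ms: float) -> float:
    return PLENUM_STATIC_PA - 0.5 * RHO_AIR * velocity_ms ** 2

for v in (2, 5, 7, 10):  # underfloor air velocities in m/s
    print(f"{v:>2} m/s -> {usable_static_pa(v):6.1f} Pa at the tile")
# At ~7 m/s the static pressure is nearly gone; past that, tiles near
# the fast stream can actually draw room air downward.
```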
Where will this lead us? Well, IBM Corp. (NYSE: IBM), Hewlett-Packard Co. (NYSE: HPQ), and Sun Microsystems Inc. (Nasdaq: SUNW) all say that blade servers will get smaller and more powerful. Cooling is a problem as old as the hills for IBM, thanks to its mainframe history. For Big Blue, it's a case of back to the future.
Most vendors believe that the only viable solution for the future is water cooling, probably through external chassis systems to cool individual devices. Of course, the more customized the solution, the higher the investment, the maintenance, and the risk of failure.

Keep cool. Keep efficient. And may your servers do the same!
— Mike Tobin, Chief Executive Officer, Redbus Interhouse plc