Inside HP's Converged Infrastructure

Our interview with Gary Thome, chief architect of HP's Infrastructure Software and Blades group, who talks power and cooling like you've never heard it before. Plus, why he thinks Hewlett-Packard's data-center play tops Cisco.

Alex Wolfe

February 10, 2010

10 Min Read

Our quest to learn about different vendors' approaches to Infrastructure 2.0 and to get beyond the hype takes us this week to Hewlett-Packard, which has bundled its combined server, storage and networking play under the "converged infrastructure" umbrella. In this column, I'll focus on my chat with Gary Thome, chief architect of HP's Infrastructure Software and Blades group.

First, some context: HP's intention to make its converged infrastructure the centerpiece of its enterprise push was emphasized on January 13, when HP CEO Mark Hurd and Microsoft chief Steve Ballmer held a joint press conference. The three-year deal announced by the two companies, around what they call an "infrastructure-to-applications model," translates as "we're going to drive customers to Microsoft software and HP enterprise infrastructure." This may be HP's most astute move yet to blunt the high profile Cisco has achieved with its Unified Computing System, a competing Infrastructure 2.0 play that similarly combines servers and networking.

Cisco was secondary--though certainly not avoided--in my discussion with Thome. I primarily wanted to hear about the hardware and software guts behind HP's converged infrastructure.

Gary told me he was trained as an electrical engineer, and that quickly became apparent during our talk. (As one EE to another, I can recognize these things.) The definitive tell was that my marketing questions were met mostly with the kinds of talking-point responses one learns in media-relations training, but Thome really got passionate when we began talking power and cooling. Now, power and cooling are generally boring subjects to hear about, but Thome piqued my interest because he made a clear case that these areas--and the techniques HP is applying to them--are differentiators that can pay big dividends in the data center.

Electric bills have long been an unpleasant line item on facilities managers' budgets, but the angst they cause burst into public view in 2006, when AMD rented billboards in New York's Times Square and beside Route 101 in Silicon Valley. The publicity stunt was intended to imprint the scrappy semiconductor maker's stamp on the energy issue. AMD's argument was that you could lower your data center's electric bill by using AMD Opteron-based servers.

Putting aside AMD's skin in this game, there's clearly a point to be made. According to perhaps the most authoritative estimate around, by Jonathan Koomey of Lawrence Berkeley National Laboratory, working off of IDC-compiled numbers, server electricity use doubled between 2000 and 2005. (Those are the most recent figures.) On the bright side, there's some evidence that the shift to cloud computing is pushing overall consumption down. Managing power and cooling is a big deal, because as data centers grow, so do electric bills. According to this Uptime Institute paper, it can cost $25 million to add a megawatt of capacity to a data center. With server racks running at 1 kW to 3 kW apiece, you're quickly talking real money if you're not careful.
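
To make that concrete, here's a back-of-the-envelope calculation using the figures above; the 350-rack expansion and the 3 kW-per-rack assumption are mine, purely for illustration:

    # Back-of-the-envelope data-center power math (illustrative assumptions only).
    COST_PER_MEGAWATT = 25_000_000   # the Uptime Institute's rough build-out figure, in dollars
    KW_PER_RACK = 3                  # assume racks at the top of the 1 kW to 3 kW range

    def build_out_cost(num_racks: int) -> float:
        """Capital cost of adding enough data-center capacity for num_racks racks."""
        megawatts = num_racks * KW_PER_RACK / 1000
        return megawatts * COST_PER_MEGAWATT

    # A 350-rack expansion at 3 kW per rack needs roughly 1 MW of new capacity:
    print(f"${build_out_cost(350):,.0f}")   # -> $26,250,000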

But that's a macro issue. On the micro side, in terms of what HP is doing to engineer its boxes, here's what Thome had to say: "We've built into BladeSystem the ability to throttle pretty much every resource. We can throttle CPUs, voltage-regulator modules, memory, fans, power supplies, all the way down to trying to keep the power consumed as low as possible at any given time.

"We have the ability to put power supplies into low-power mode, and then shed power onto other supplies, while still maintaining redundancy. This allows the power supplies that are running to run at the highest efficiency levels. " Thome adds, "It's not just power supplies. We have variable-speed fans. Plus, the fans are set up in a zone, so if one part of the chassis is running hot, those fans will run faster, and on another part of the chassis, the fans will run slower."

I interjected that I figured processors were a big area of focus, because I knew that both Intel and AMD have implemented on-chip power management capabilities. Thome responded that memory actually consumes more power.

That's a relevant point, because the dynamic we're seeing in the data center these days is that servers are being configured with much higher memory densities. This is driven, of course, by virtualization, which in turn has upped the number of logical processor instances in each rack. So it makes sense that purchase orders would specify more memory. "We've seen customers move from a relatively small number of DIMMs, like four or eight, to eight to 16 DIMMs," said Thome. "That's been a transition driven largely by the multi-cores and the virtualization going along with it."

Off-the-shelf servers are ready to rise to the memory challenge. For example, HP's ProLiant BL490c server can support up to 18 DIMMs and 288 GB of RAM in a half-height blade form factor. Sixteen of these blades can fit into a single enclosure. It's also got dual 10Gb Ethernet ports on the motherboard, which speaks to the increased connectivity demands, but I'll say more about that later.
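
A little arithmetic on those figures, just to show the scale we're talking about (my math, based on the specs above):

    # Quick arithmetic on the enclosure-level memory ceiling, using the figures above.
    DIMMS_PER_BLADE = 18
    MAX_RAM_PER_BLADE_GB = 288           # implies 16 GB DIMMs: 288 / 18 = 16
    BLADES_PER_ENCLOSURE = 16

    print(MAX_RAM_PER_BLADE_GB / DIMMS_PER_BLADE)               # 16.0 GB per DIMM slot
    print(MAX_RAM_PER_BLADE_GB * BLADES_PER_ENCLOSURE / 1024)   # 4.5 TB of RAM per enclosure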

Turning back to the processors, Thome noted that HP has augmented the Intel- and AMD-provided power management features with controls of its own. "We have hooks built in so we can throttle the processors independently of the operating system to maximize power savings while still being able to deliver expected performance levels," he said.
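
Thome didn't spell out the mechanism, but conceptually it's a closed control loop: measure power draw, compare it against a cap, and nudge the processor's performance state up or down. Here's a generic sketch of that idea, with thresholds and interfaces that are my assumptions rather than HP's implementation:

    # Generic sketch of the closed-loop throttling idea: step the CPU to a deeper
    # P-state when measured power exceeds a cap, step back out when there's headroom.
    # Thresholds and interfaces are assumptions, not HP's implementation.
    def next_pstate(current: int, measured_watts: float, cap_watts: float, deepest: int = 4) -> int:
        """Return the next P-state (0 = full speed, higher = more throttled)."""
        if measured_watts > cap_watts and current < deepest:
            return current + 1          # over the cap: throttle down one step
        if measured_watts < 0.9 * cap_watts and current > 0:
            return current - 1          # comfortably under the cap: restore performance
        return current

    print(next_pstate(current=0, measured_watts=310.0, cap_watts=300.0))   # -> 1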

Closing the loop on the electricals, I'd say I've never written as much about power in one place. However, I felt it was important to let Thome have his say, because it indicates how crucial this area is. This also spotlights the truck-wide gap between what the hype cycle says is important and where engineering efficiencies are really implemented. So power and cooling are on my list to watch, even if I don't dream about them at night.

Market Battle

Before we get into virtualization and network connectivity (aka 10Gb Ethernet)--the two other big kahunas of the modern data-center play--I'd like to turn to the market competitiveness issues. That's because we're seeing a titanic battle for server share among HP, IBM, Dell and Oracle/Sun.

The traditional way of looking at this market is that these vendors are all jockeying for position. Every quarter, you check the IDC figures and see who's on top in terms of both revenue and unit sales. This stuff is like baseball standings, because the leader in one category might not be on top in the other. However, historically, it's been a slowly changing playing field in terms of quarter-to-quarter movement.

Today, though, we might be on the cusp of an inflection point. Cisco, in the role of the big newcomer, is up-ending the market. That's because, with its Unified Computing System push, it's no longer selling just networking, but has bundled servers into its product equation. True, it's offering the servers alongside networking as part of a total data center solution.

Nevertheless, the fact that Cisco is indeed now marketing servers--just as HP, IBM, and Dell also sell networking--has forced a competitive response and is changing the market dynamic.

To get some quantitative perspective on where the battle stands, let's look at the latest IDC server numbers. The latest figures, released in December, are for the third quarter of 2009. According to IDC: "IBM and HP ended the third quarter in a statistical tie with 31.8 percent and 30.9 percent of overall factory revenue market share respectively." Dell was third with 13.5 percent; Sun (now part of Oracle) came in fourth with 7.5 percent and Fujitsu's 5.7 percent share took fifth. "All others" comprised the remaining 10.6 percent. The full data-dump, including revenue numbers, is available in the IDC press release.

Cisco formally entered the server market in early 2009 with blades integrated with network switching. Gartner has pegged Cisco as a "visionary" in its Magic Quadrant assessment of the blade space. I don't see any breakout of Cisco's ranking in the IDC figures, so we'll have to assume the company falls within the 10.6 percent share carved up among the others. I hope to talk to Cisco for a future "Server Den" column, and we'll get their market-positioning and product scoop direct from the source.

Turning back to HP, I was interested in hearing Thome's take on Cisco. He didn't mince words. "What they've got is, they've got blades," he said. "We would argue they're not as good as what we've got." Thome segued into a comparison: "When we announced our c-Class BladeSystem in 2006, that was built on top of years of expertise. Within one quarter, we were the number one blade vendor, and we grew a 12% share in three quarters. To date, we've shipped 1.6 million blades on it.

"If you look at what Cisco has announced, they've got a blade closure with eight blades in it. Whereas we have one with 16. So they wind up being less dense, but actually need more cables. They deliver less bandwidth than what we can deliver with C-7000, but actually require more power, and they've got a more limited portfolio of products compared to what we have on our c-Class BladeSystem."

An aside: Normally I don't let vendors go on at length about how they're much better than a competitor, because usually that comes across as ad-talk or a sales pitch. Indeed, in the old days of trade publishing, the accepted standard was to do a muted "he said, she said," which resulted in tepid copy and never illuminated the intense market squabbles existing not far beneath the surface. However, in today's Web 2.0 world, I'm going with full quotes because I want you, dear readers, to get a sense for the intensity of the competition. As well, like I said earlier, I will let Cisco have its say in a future column.

Converged Infrastructure

A column on HP's converged infrastructure would be incomplete if I didn't spell out what, exactly, that infrastructure comprises. So here goes. The architecture merges not only the expected buckets of processing (servers), storage, and networking; in HP's taxonomy, it also takes in power and cooling, plus management software. Thus, it's positioned as a total play through which data-center efficiencies can be achieved.

Here's Thome again, on HP's architectural vision: "BladeSystem Matrix is our first step into the converged infrastructure vision. HP Virtual Connect is the second key component of the overall architecture. We've developed blended storage solutions as well."

Virtual Connect is a key piece of the architecture, negotiating the connection between the servers and the network. It's intended to make the server component appear as one system to the external LAN and SAN. Its operational goal is to simplify the connection between those servers and the LANs and SANs, and to greatly cut down on the time admins spend setting up those connections.
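
Conceptually, Virtual Connect does this with server profiles that carry their own network and storage identities, so the physical blade underneath can change without the LAN or SAN noticing. Here's a rough sketch of that idea; the field names and addresses are illustrative, not HP's actual data model:

    # Conceptual sketch of the server-profile idea behind Virtual Connect: the
    # enclosure presents stable, locally administered MAC addresses and WWNs to
    # the LAN and SAN, so a blade can be replaced or moved without touching
    # upstream switch or fabric configuration. Fields and values are illustrative.
    from dataclasses import dataclass

    @dataclass
    class ServerProfile:
        name: str
        virtual_macs: list          # presented to the LAN instead of burned-in MACs
        virtual_wwns: list          # presented to the SAN fabric
        bay: int = 0                # blade bay the profile is currently applied to

        def move_to_bay(self, new_bay: int) -> None:
            """Re-apply the profile to another bay; LAN/SAN identity is unchanged."""
            self.bay = new_bay

    web01 = ServerProfile("web01", ["02:16:3E:00:00:01"], ["50:06:0B:00:00:C2:62:00"], bay=3)
    web01.move_to_bay(7)    # hardware swap: no changes needed on the LAN or SAN side
    print(web01)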

As for the management software: while not exciting on the face of it, it's a more crucial element than one might initially realize in making the whole package operate smoothly. "It's built on top of some of our leading operations-orchestration technology, which we acquired through our Opsware acquisition in 2007," said Thome. "It allows you to create a graphical template which describes your entire workload."
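
Thome didn't walk through a template in detail, but here's a hypothetical example of the kind of information one might capture; the structure and field names are mine, not BladeSystem Matrix's actual format:

    # Hypothetical example of what a workload template might describe
    # (structure and names are illustrative, not BladeSystem Matrix's format).
    workload_template = {
        "name": "three-tier-web-app",
        "tiers": [
            {"role": "web", "count": 4, "cpu_cores": 4,  "memory_gb": 16, "networks": ["prod-vlan"]},
            {"role": "app", "count": 2, "cpu_cores": 8,  "memory_gb": 32, "networks": ["prod-vlan"]},
            {"role": "db",  "count": 1, "cpu_cores": 16, "memory_gb": 64,
             "networks": ["prod-vlan"], "san_volumes": ["db-lun-01"]},
        ],
    }
    print(sum(tier["count"] for tier in workload_template["tiers"]), "server instances described")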

What have we missed? I haven't discussed the 10Gb Ethernet angle. As with increased memory densities, the shift to 10Gb Ethernet is being forced, in a not-insignificant sense, by the increased use of virtualization. (More "processors," hence more need for I/O.) Or, in Thome's words: "Virtualization runs great with a lot of cores, but along with that, you want to have a complete balanced system. So it's driving things like 10G connectivity and networking. We have 10Gb Ethernet on the motherboard of most of the blades we sell."

Another option enables slicing and dicing the connection. HP's Virtual Connect Flex-10 capability can take a 10Gb connection (more precisely, two, so there's redundancy) and make it appear as multiple NICs to the server. This is an easy way to tap the higher speeds in an existing infrastructure, and it also offers tidy connectivity into VMware setups that would otherwise require multiple 1Gb NIC hook-ups.
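
In other words, one physical 10Gb port gets carved into several smaller virtual NICs whose combined bandwidth can't exceed the link. Here's a toy sketch of that bookkeeping, with names and numbers that are mine rather than HP's configuration interface:

    # Toy sketch of carving one 10 Gb port into several virtual NICs whose combined
    # allocation cannot exceed the physical link. Illustrative only; this is not
    # HP's Flex-10 configuration interface.
    PORT_CAPACITY_GBPS = 10.0

    def partition_port(allocations):
        """Validate per-virtual-NIC bandwidth shares (in Gb/s) against the 10 Gb port."""
        total = sum(allocations.values())
        if total > PORT_CAPACITY_GBPS:
            raise ValueError(f"over-subscribed: {total} Gb/s requested on a 10 Gb/s port")
        return allocations

    # e.g. management traffic, VM migration traffic, and two VM data NICs:
    print(partition_port({"mgmt": 0.5, "migration": 2.5, "vm-data-a": 3.5, "vm-data-b": 3.5}))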

HP's Converged Infrastructure page is here.

 
