Are Converged Systems Right for You?

Converged infrastructure streamlines integration and provides better TCO than traditional systems. However, more economical and flexible converged solutions may be on the horizon.

Jim O'Reilly

November 12, 2014

4 Min Read

As we move to commodity-class COTS gear in the data center, the question of who integrates the hardware comes up. For the enterprise, this can be a daunting proposition, as the current buying mode tends to leave that question to system vendors or third-party contractors.

Most of the large system vendors now offer an alternative: converged infrastructure, which is another way of saying that they deliver all of the hardware -- servers, storage, and networks -- pre-integrated. The units can be delivered in a rack ready to go, or built up into containers for more avant-garde customers.

Pre-integration brings many benefits. All the cabling is done at the factory, and the factory test is performed with the equipment in its installed configuration. This reduces handling damage and other issues, and it cuts installation work and test cycles by a large factor. The result is an installation that is much faster and less frustrating than integrating component systems from scratch.

Another plus of the converged infrastructure approach is that any compatibility issues are fixed prior to shipment. Converged systems are typically offered in a limited set of configurations that the vendor has defined and characterized well, which results in predictable system performance.

Software overlays such as virtualization and network or storage services can be added and tested by the vendor, and in some cases database software and/or app stacks can be installed as well.

Converged systems sound like the answer to a data center operator's prayers: COTS for cloud infrastructure can be bought with a high level of pre-integration. Of course, converged systems demand a premium for the extra service -- nothing is free! Tying everything to fixed vendor configurations, and buying storage and servers from the same vendor, may not be the lowest-cost approach, either in the initial installation or in subsequent upgrades. In this respect, converged systems run counter to today's trend away from vendor lock-in.

Over the short haul, the extra cost of a converged system is still less painful than building a team to do the job in-house and living through the inevitable missteps. On a TCO basis, converged alternatives compare reasonably against conventional buying.
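To make that comparison concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it -- the hardware premium, the in-house integration labor, the cost of missteps -- is an illustrative assumption, not vendor pricing; substitute your own quotes and labor estimates.

# Rough three-year TCO comparison; all dollar figures are illustrative assumptions
def tco(hardware, integration_labor, rework, annual_support, years=3):
    """Sum acquisition, integration, rework, and support costs over the period."""
    return hardware + integration_labor + rework + annual_support * years

# Converged: hardware carries a premium, but integration and rework are largely prepaid
converged = tco(hardware=1_150_000, integration_labor=20_000, rework=5_000, annual_support=90_000)

# Do-it-yourself: cheaper boxes, but in-house integration time and missteps add up
diy = tco(hardware=1_000_000, integration_labor=120_000, rework=60_000, annual_support=80_000)

print(f"Converged 3-year TCO: ${converged:,}")
print(f"DIY 3-year TCO:       ${diy:,}")

The point is not the specific numbers but the shape of the trade-off: the vendor's premium is weighed against the integration labor and rework it displaces.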

At the same time, the hardware landscape is changing rapidly. Giant cloud service providers such as Google and Amazon have dramatically reset expectations for low-cost systems. They buy stripped-down, no-frills hardware with proprietary cooling, cabling, and configuration. While some of this is percolating into the general market, the major impact is that they have reshaped the supply chain.

The world's biggest server makers are a bunch of Chinese ODMs that are just beginning to encroach on the US enterprise market. They offer low-cost solutions with a high do-it-yourself content -- in effect, barebones boxes -- which leave the integrator to add commodity drives and other components.

The software-defined data center movement abstracts data services and control from switches and storage arrays, moving them to virtualized server instances. In both software-defined networking and software-defined storage, the hardware profile fits the barebones model the ODMs are driving.

Does this mean that enterprise and mid-tier customers will have to learn hardware design? More likely, we'll see pre-configured systems and converged systems both from system vendors and from fulfillment houses such as Sanmina, Synnex, and Xyratex.

In the mid-tier, integrators can deliver the pre-integration service. It's worth noting that COTS simplifies the integration task considerably, since the servers are in effect identical except for any added drives or NICs.
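As a rough illustration of why identical nodes make the integrator's job easier, the short Python sketch below compares each node's hardware inventory against a reference configuration and flags deviations. The inventory dictionaries are hypothetical stand-ins for whatever your asset or provisioning tooling actually reports.

# Flag nodes that deviate from a reference COTS configuration
# (inventories are hypothetical placeholders for real asset-tool output)
reference = {"cpu": "2x Xeon E5-2670", "ram_gb": 128, "nics": 2, "drives": 2}

nodes = {
    "node01": {"cpu": "2x Xeon E5-2670", "ram_gb": 128, "nics": 2, "drives": 2},
    "node02": {"cpu": "2x Xeon E5-2670", "ram_gb": 128, "nics": 4, "drives": 2},  # extra NICs
}

for name, inventory in nodes.items():
    diffs = {key: (reference[key], value)
             for key, value in inventory.items() if reference.get(key) != value}
    print(name, "matches reference" if not diffs else f"deviates: {diffs}")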

One impact of the convergence process may be the acceleration of the transition to Ethernet for storage connections. Taking Fibre Channel out of the picture simplifies integration, since we are then reduced to a cluster of identical servers, connected by Ethernet to identical storage boxes, which most IT operations can take in stride.
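As an example of how routine that attachment becomes, the sketch below prints standard open-iscsi discovery and login commands for a list of Ethernet-attached storage boxes. The portal addresses are hypothetical placeholders; the same two commands repeat for every box precisely because the boxes are identical.

# Generate open-iscsi commands to attach a server to each identical storage box
# (portal addresses are hypothetical placeholders)
storage_portals = ["10.0.10.11", "10.0.10.12", "10.0.10.13"]

for portal in storage_portals:
    # Discover the targets each portal exposes, then log in to what was found there
    print(f"iscsiadm -m discovery -t sendtargets -p {portal}:3260")
    print(f"iscsiadm -m node -p {portal}:3260 --login")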

Here's my takeaway on converged infrastructure: It's a good idea, and it offers considerable TCO savings over traditional approaches. The longer-term evolution is toward simpler converged clusters, with more third-party integrators surfacing to take on the integration task. This evolution will increase options and should lower costs, especially if ODM boxes are in play.

About the Author

Jim O'Reilly

President

Jim O'Reilly was Vice President of Engineering at Germane Systems, where he created ruggedized servers and storage for the US submarine fleet. He has also held senior management positions at SGI/Rackable and Verari; was CEO at startups Scalant and CDS; headed operations at PC Brand and Metalithic; and led major divisions of Memorex-Telex and NCR, where his team developed the first SCSI ASIC, now in the Smithsonian. Jim is currently a consultant focused on storage and cloud computing.
