Data Center Lessons From The Super 7

Hyperscale companies like Amazon, Facebook, and Google are setting the pace for infrastructure efficiency.

Kevin Deierling

September 19, 2016

5 Min Read

"The Magnificent Seven" is one of my favorite old Hollywood westerns, so I wasn’t surprised to see a a remake on the horizon. The title reminds me  of the seven hyperscale companies that have been called the Super 7: Amazon, Facebook, Google, Microsoft, Baidu, Alibaba, and Tencent. What is it that makes these seven companies so super? And most importantly, what can other companies learn from them to make their own data centers work better?

Make it open

The Super 7 are really BIG. In 2008, just three companies -- Dell, HP, and IBM -- accounted for 75% of the servers in the world. Fast forward just four years and eight companies made up 75% of the server market, including Google, the first of the magnificent seven to build its own servers. Today, all of the hyperscale companies build their own servers or have customized servers built for them by OEM/ODM partners. Several of them are even designing and building their own “white-box” Ethernet switches and integrating them with their own specialized network operating systems.

Not every company needs to build its own servers or have customized servers built for it, but data center hardware can be a key competitive differentiator for any business. Traditional networking and storage platforms are black boxes, with software and hardware delivered as a vendor-defined solution. These closed boxes not only cost more, but also stifle innovation, limit customization, and create vendor lock-in. By contrast, the white-box switches and server platforms originally developed for the Super 7 are now offered by vendors to any business, making customization and optimization easier than ever.

Software-defined everything

The Super 7 have embraced a software-defined everything (SDX) architecture. That means that instead of buying purpose-built compute and storage appliances, they run all their workloads on industry-standard servers and use software to create tightly coupled compute clusters and fault-tolerant storage systems. This “build-it-yourself” mentality has allowed these giants to streamline their infrastructure by eliminating costly Fibre Channel storage area networks and running everything on a single, converged network environment.

Software-defined storage and networking are no longer emerging technologies. They're being deployed not only by the big players – the innovators – but by enterprise early adopters as well. It won’t be long before the majority realizes the control, flexibility, and savings a software-defined architecture can provide. Software-defined architecture makes data center differentiation a real option for even the most humble of organizations.

Agility is a virtue

Being nimble is critical, and the Super 7 are certainly nimble – they adopt the latest technologies and use automation to cope with the massive scale of their data centers. To ensure the highest levels of efficiency, they upgrade their servers every three years versus the four- to five-year upgrade cycle more typical of enterprise environments. They are able to customize these servers and storage platforms to their exact needs and, because they use SDX architectures, can eliminate costly management and redundancy elements. Instead, they are able to use software to achieve high availability at a rack or even data center level.

It may not be financially feasible for smaller organizations to match the speed and frequency with which the Super 7 adopt new technologies or upgrade their servers, but organizations of all sizes can still keep a close eye on which technologies these giants are deploying and which are proving most successful. Data center operators can let the hyperscale companies test the waters and then determine which technologies make the most sense for their own organizations.

Moreover, organizations that deploy tried-and-true software-defined solutions now will likely find future upgrades less costly and easier to roll out down the road. Many organizations get caught up in the initial sticker price, but it is really the total cost of ownership that should drive IT decisions. Sometimes this means purchasing technology that pushes the boundaries a bit now but is better equipped to address future needs.
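
To make the sticker-price-versus-TCO argument concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (purchase prices, power draw, admin hours, electricity and labor rates) is a hypothetical placeholder chosen for illustration, not real vendor pricing.

    # Hypothetical sticker-price vs. total-cost-of-ownership comparison.
    # All figures are illustrative placeholders, not real vendor pricing.

    def tco(capex, power_kw, admin_hours_per_year, years,
            kwh_price=0.10, admin_rate=75.0):
        """Total cost of ownership: purchase price plus power and admin labor."""
        power_cost = power_kw * 24 * 365 * kwh_price * years
        admin_cost = admin_hours_per_year * admin_rate * years
        return capex + power_cost + admin_cost

    # A cheaper box that draws more power and needs more hands-on management...
    legacy = tco(capex=8_000, power_kw=0.45, admin_hours_per_year=40, years=5)
    # ...versus a pricier but more efficient, more automated platform.
    modern = tco(capex=11_000, power_kw=0.30, admin_hours_per_year=15, years=5)

    print(f"Legacy 5-year TCO: ${legacy:,.0f}")   # ~$24,971
    print(f"Modern 5-year TCO: ${modern:,.0f}")   # ~$17,939

Under these assumed numbers, the platform with the higher sticker price is noticeably cheaper to own over five years, which is exactly the trade-off the Super 7 routinely make.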

The long play

Finally, the tech giants use the most advanced networking equipment because they’ve realized it's the only way to get the most out of their servers and storage. They've migrated to 25, 40, 50 and even 100 Gigabit Ethernet to be able to run the maximum number of workloads on their compute clusters. In the case of the cloud vendors, this profoundly impacts their bottom line because, after all, they are selling virtual machines and application workloads to their customers. So the more efficiently they can use their compute and storage infrastructure, the more virtual machines and workloads they can host, and the more they have to sell to customers.
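
As a rough sketch of that economic argument, assume each virtual machine needs a fixed slice of network bandwidth (the 0.5 Gbps figure below is an assumption for illustration, not a measured value); the faster the NIC, the more network-bound VMs a single server can host and sell.

    # Rough illustration of how NIC speed caps the number of network-bound
    # VMs a server can host. The per-VM bandwidth is an assumed placeholder.

    PER_VM_BANDWIDTH_GBPS = 0.5   # assumed average network demand per VM

    def vms_per_server(nic_speed_gbps, headroom=0.7):
        """VMs supportable before the NIC becomes the bottleneck,
        keeping 30% headroom for traffic bursts."""
        return int(nic_speed_gbps * headroom / PER_VM_BANDWIDTH_GBPS)

    for nic_gbps in (10, 25, 50, 100):
        print(f"{nic_gbps:>3} GbE -> up to ~{vms_per_server(nic_gbps)} network-bound VMs")

Under these assumptions, moving from 10 GbE to 25 GbE more than doubles the number of sellable VMs per server without adding a single CPU or gigabyte of memory.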

In adopting 25, 50 and 100 Gigabit Ethernet, the Super 7 set the pace for forward-thinking data center architects. These innovators understand that merely upgrading to the next available speed isn’t always the best strategy, because all too often that technology is outdated before the deployment is even complete. Organizations of all sizes must anticipate future needs every time an upgrade is under consideration. Every technology purchase must balance immediate cost, performance, total cost of ownership and, most importantly, future-proofing.

A blueprint

So, like the brave villagers in the movie, who were the real winners, the rest of the industry will reap the benefits of the pioneering efforts of these tech giants. The availability of open platforms has spawned a whole new ecosystem of open networking technologies such as Cumulus Networks, OpenSwitch, and Microsoft SONiC. Furthermore, the rapid adoption of 25, 50 and 100 GbE by these companies is solving the traditional price/volume chicken-and-egg problem that slows new technology adoption.

Best of all, they’ve created a blueprint for others to follow and clearly demonstrated that the path to total infrastructure efficiency requires state-of-the-art servers and storage combined with high-performance Ethernet networking.

About the Author

Kevin Deierling

Kevin Deierling is Senior Vice President of NVIDIA Networking. He joined NVIDIA when it acquired Mellanox, where he served as vice president of marketing responsible for enterprise, cloud, Web 2.0, storage, and big data solutions. Previously, Deierling served in various technical and business management roles, including chief architect at Silver Spring Networks and vice president of marketing and business development at Spans Logic. Deierling has contributed to multiple technology standards through organizations including the InfiniBand Trade Association and the PCI Industrial Computer Manufacturers Group (PICMG). He has more than 25 patents in the areas of security, wireless communications, error correction, video compression, and DNA sequencing, and was a contributing author of a text on BiCMOS design. Deierling holds a BA in solid state physics from UC Berkeley.
