Rethinking Data Center Design
With the skyrocketing number of connected devices and data processing requirements, data center operators are migrating to a new open architecture that's focused on virtualization.
August 20, 2014
By the end of the decade, the number of connected devices is expected to reach 50 billion. These billions of devices are generating a massive amount of data: It's estimated that, as early as 2017, 7.7 zettabytes of data will cross the network. This surge in data and processing requirements represents a massive challenge for the data center ecosystem as operators abandon client-server and LAN architectures in favor of designs that emphasize virtualization in servers, storage, and networking.
Increasingly, companies are embracing a more flexible, open platform built on the technology pillars of mobile computing, cloud services, big data, and social networking. Trendsetters such as Facebook are building megascale data centers to handle the tremendous bandwidth demands and workloads. Facebook has said it achieved $1.2 billion in savings as a result of its open-platform approach.
Many businesses and enterprises are embracing cloud computing, essentially buying compute capacity from a third party, which spares them the capital and operating expenses of running their own data centers. As a result, cloud service providers are among the heaviest investors in open-platform, megascale data centers. Traditional server vendors, which provide high-level service but do so at a premium, are likely to face serious competition from open-platform vendors, which offer a less expensive, more flexible, and more scalable infrastructure.
Using an open-platform approach means looking at a data center development project as a whole. Servers are a core technology, but it's important to consider the entire system of servers, storage, networking, and software together and to rethink how those components are integrated in order to bring truly disruptive change to the data center.
Servers
An open-platform approach touches on more than just the server, but the server still plays a critical role in delivering the capacity, processing speed, and energy efficiency demanded of the next-generation data center. As virtualization becomes the norm, each physical server must be built to host scores of virtual servers in order to increase utilization. Servers need to be powered by multi-core processors that are both fast and energy efficient, and they must interact seamlessly with increasingly virtualized storage and networking systems.
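To make the consolidation point concrete, here is a back-of-the-envelope sketch of how many virtual servers a single physical host might support. Every figure in it (core count, memory, oversubscription ratio, VM size) is a hypothetical assumption chosen for illustration, not a number from this article; the takeaway is simply that capacity is bounded by whichever resource runs out first.

    # Rough VM consolidation estimate; all figures are illustrative assumptions.
    PHYSICAL_CORES = 32          # cores in a hypothetical multi-core server
    PHYSICAL_RAM_GB = 256        # installed memory
    CPU_OVERSUBSCRIPTION = 4.0   # virtual CPUs allowed per physical core

    VCPUS_PER_VM = 2             # size of a typical small virtual server
    RAM_PER_VM_GB = 8

    vms_by_cpu = int(PHYSICAL_CORES * CPU_OVERSUBSCRIPTION / VCPUS_PER_VM)
    vms_by_ram = PHYSICAL_RAM_GB // RAM_PER_VM_GB
    vms_per_host = min(vms_by_cpu, vms_by_ram)   # limited by the scarcer resource

    print(f"CPU budget allows {vms_by_cpu} VMs, memory allows {vms_by_ram} VMs")
    print(f"-> roughly {vms_per_host} virtual servers on one physical server")

With these assumed numbers, memory, not CPU, caps the host at a few dozen virtual servers, which is why next-generation server designs pair multi-core processors with large, fast memory and I/O.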
Many semiconductor companies and server manufacturers are developing servers running on ARM-based processors instead of the industry standard x86 architecture. ARM processors are common in smartphones and in emerging devices as the Internet of Things trend takes hold, connecting home appliances, automobiles, and various sensors to the network. ARM is helping companies develop processors with innovative multi-core CPUs that deliver true server-class performance and offer best-in-class virtualized accelerators for networking, communications, big data, storage, and security applications.
Networking
The modern data center will also need faster network connectivity, replacing gigabit Ethernet (GbE) connections with 10 GbE, 40 GbE, and eventually 100 GbE pipes. A 10 GbE fabric network, on which traffic flows east-west (server to server) as well as north-south (in and out of the data center), promotes energy efficiency, manageability, and flexible use of computing resources through network virtualization.
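As a rough illustration of how such a fabric is sized, the sketch below computes the oversubscription ratio of a hypothetical top-of-rack switch with 10 GbE server-facing ports and 40 GbE uplinks into the fabric. The port counts and speeds are assumptions made for this example, not figures from the article.

    # Oversubscription of a hypothetical top-of-rack switch in a 10 GbE fabric.
    downlink_ports, downlink_gbps = 48, 10   # server-facing 10 GbE ports (assumed)
    uplink_ports, uplink_gbps = 4, 40        # 40 GbE uplinks into the fabric (assumed)

    downlink_capacity = downlink_ports * downlink_gbps   # Gbit/s toward servers
    uplink_capacity = uplink_ports * uplink_gbps         # Gbit/s into the fabric

    ratio = downlink_capacity / uplink_capacity
    print(f"Oversubscription: {ratio:.1f}:1 "
          f"({downlink_capacity} Gbit/s of server ports behind {uplink_capacity} Gbit/s of uplinks)")

A lower ratio means more headroom for east-west traffic between servers; raising uplink speeds is one way operators keep that ratio in check as server links get faster.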
At the same time, a new Ethernet specification has been developed to improve the speed and lower the cost of Ethernet connectivity between the top-of-rack switch and the server network interface controller within a data center. A recently formed industry group, the 25 Gigabit Ethernet Consortium (which includes Broadcom), created the spec to allow data center networks to run over a 25 Gbit/s or 50 Gbit/s Ethernet link protocol.
The specification prescribes a single-lane 25 Gbit/s Ethernet and dual-lane 50 Gbit/s Ethernet link protocol, enabling up to 2.5X higher performance per physical lane or twinax copper wire between the rack endpoint and switch compared to current 10 Gbit/s and 40 Gbit/s Ethernet links. The Institute of Electrical and Electronics Engineers (IEEE), the governing body for Ethernet standards, is considering the technology for a potential future IEEE standard.
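The "up to 2.5X" figure follows directly from the per-lane rates. Here is a short calculation, assuming the lane counts described above (one lane for 10 GbE and 25 GbE, four lanes for 40 GbE, two lanes for 50 GbE):

    # Per-lane throughput behind the consortium's "up to 2.5X" claim.
    links = {
        "10 GbE": (10, 1),   # (total Gbit/s, physical lanes)
        "40 GbE": (40, 4),
        "25 GbE": (25, 1),
        "50 GbE": (50, 2),
    }

    for name, (total_gbps, lanes) in links.items():
        print(f"{name}: {total_gbps / lanes:.0f} Gbit/s per physical lane")

    # 25 Gbit/s per lane vs. 10 Gbit/s per lane over the same copper or cabling
    print(f"Per-lane speedup over 10/40 GbE: {25 / 10:.1f}x")

In other words, a single lane or twinax wire that carried 10 Gbit/s can carry 25 Gbit/s, so the same physical cabling between rack endpoint and switch delivers 2.5 times the throughput.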
Storage
The modern data center built for the cloud also breaks new ground in storage technology with what's known as storage disaggregation. In recent years, storage was aggregated with compute inside the server so data could be retrieved from storage faster. When solid-state drives (SSDs) caught on as the new storage medium, that in-server storage became more expensive. But now that faster connections are available between compute and storage, storage can once again be separated from compute, or disaggregated, and shared across servers.
New interconnection technology can move data at 40 Gbit/s, and soon 100 Gbit/s, with minimal latency. Disaggregation gives data center operators more flexibility to upgrade or replace individual components instead of replacing entire systems. Organizations can use SSD storage for data that needs to be retrieved quickly and less expensive SAS and SATA drives for less urgent data.
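To get a rough sense of why faster interconnects make disaggregation practical, the sketch below estimates how long it would take to move a dataset between a compute node and a shared storage shelf at different link speeds. The dataset size and link efficiency are illustrative assumptions, not figures from the article.

    # Time to move a dataset to or from disaggregated storage at various link speeds.
    DATASET_TB = 1.0        # hypothetical dataset size
    LINK_EFFICIENCY = 0.9   # assume ~90% of the raw line rate is usable

    dataset_gbits = DATASET_TB * 8000   # 1 TB ~= 8,000 gigabits (decimal units)

    for link_gbps in (10, 40, 100):
        seconds = dataset_gbits / (link_gbps * LINK_EFFICIENCY)
        print(f"{link_gbps:>3} Gbit/s link: ~{seconds:,.0f} s to move {DATASET_TB} TB")

At 40 Gbit/s and beyond, moving data across the rack takes minutes rather than the better part of an hour, which is what makes it reasonable to keep storage in a shared pool instead of inside every server.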
These technological changes may present challenges to data center managers more familiar with operating in the client-server, physical-hardware world, but they also represent promising opportunities to build more efficient, open, megascale data centers that satisfy the growing demand for faster, higher-capacity computing.