Tackling Data Center Challenges With Open Ethernet
Data centers are under more pressure than ever. Open networking offers an alternative to closed-code Ethernet switches, giving companies better flexibility to meet growing data center demands.
September 1, 2015
Computing and storage systems in today’s data centers are being pushed to the brink as data sets continue to grow. Data centers must handle exponentially expanding volumes of transactions and data, essentially functioning as the backbone of day-to-day corporate business. Web 2.0 and big data infrastructures are struggling to keep up with the need to analyze all this data in real time, and providers are trying to manage these demands while keeping costs down. The company that finds a way to manage the data explosion gains a competitive edge.
The key elements for handling hyper-scale data demands are better network performance and scalability. As more data from more users enters the network, it's crucial that information flows at faster speeds, enabling greater analysis and, ultimately, greater innovation. Whereas only a few years ago 1 Gigabit and 10 Gigabit Ethernet solutions were ubiquitous in the marketplace, such solutions can no longer keep up with the tremendous requirements of today’s data centers.
The new challenge is to deploy 25 GbE in place of 10 GbE and 50 GbE in place of 40 GbE, to use 100 GbE for optimized rack aggregation, and to incorporate offloads that reduce the CPU overhead of networking to near zero. Making this transition successfully requires solutions that deliver higher bandwidth, better scalability through higher port density, and greater energy efficiency.
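To make the arithmetic behind optimized rack aggregation concrete, here is a minimal Python sketch comparing two hypothetical top-of-rack designs: 32 servers cabled at 10 GbE with four 40 GbE uplinks versus the same rack cabled at 25 GbE with four 100 GbE uplinks. The server and uplink counts are illustrative assumptions, not figures from the article or any vendor.

# Back-of-the-envelope comparison of two hypothetical rack designs.
# The server count (32) and uplink count (4) are assumptions for illustration.

def rack_capacity(servers, downlink_gbps, uplinks, uplink_gbps):
    """Return total downlink Gbps, total uplink Gbps, and the oversubscription ratio."""
    down = servers * downlink_gbps
    up = uplinks * uplink_gbps
    return down, up, down / up

for label, downlink, uplink in [("10/40 GbE rack", 10, 40), ("25/100 GbE rack", 25, 100)]:
    down, up, ratio = rack_capacity(servers=32, downlink_gbps=downlink,
                                    uplinks=4, uplink_gbps=uplink)
    print(f"{label}: {down} Gbps to servers, {up} Gbps to the aggregation layer, "
          f"{ratio:.1f}:1 oversubscription")

With the same number of cables and switch ports, the 25/100 GbE design delivers 2.5 times the per-server bandwidth while holding oversubscription at 2:1, which is the sense in which the faster speeds make rack aggregation more efficient.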
These new speeds let companies profit from the ability to analyze massive quantities of data in real time. Businesses can make better-informed decisions when they build applications on a high-performance interconnect that speeds the flow of information. For example, PayPal's robust network allows it to analyze more than 14 million financial transactions and 4 billion database records per day to offer real-time fraud detection. Similarly, the emerging field of artificial intelligence relies on machine learning, which is powered by processing and recognizing trends and patterns in a never-ending stream of data.
However, while speed certainly improves the ability to cope with the data explosion, it alone cannot meet the demands of today’s data centers. To scale with ever-growing demand, data centers also need the flexibility to choose the hardware and software combinations that best meet their needs, rather than remaining beholden to proprietary, closed solutions. The network of the future is an open one, offering the flexibility and freedom of choice to optimize software and infrastructure independently in order to gain a competitive advantage from the data center.
Traditionally, companies chose hardware solutions that met their needs, but were then locked into proprietary software systems to interface with that hardware, whether or not that software worked in their best interests. Aside from a few huge corporations that could deploy an army of software engineers to build customizations around the closed system, most companies were stuck with an inflexible, sub-optimal solution.
While closed solutions from incumbent vendors have provided businesses with reliable networking technology, their rigidity and proprietary nature have left many organizations unable to manage the data explosion in a way that benefits their business. The inability to select the best-of-breed hardware and software for their unique network and application needs restricts their ability to turn the data center into a competitive advantage.
Today, though, there is Open Ethernet, a Mellanox initiative that surmounts the barrier of closed systems. Open Ethernet offers an alternative to traditional closed-code Ethernet switches, and provides companies with flexibility and freedom to custom-design their data centers, thereby achieving higher levels of utilization, optimal efficiency, and better overall return on investment. This, in turn, encourages greater scalability and, ultimately, more innovation.
Thanks to the Open Compute Project, a consortium of companies led by Facebook, there is now a standard definition for the hardware elements of Open Ethernet components. This has led to the introduction of switch systems that incorporate such Open Compute designs, giving data center operators the flexibility to select the network elements that best fit their needs, regardless of vendor.
This gives companies the opportunity not only to manage their data centers according to their specific corporate needs, but also to save money in the process. They can reduce capital expenditures by buying only the hardware and software components their network requires, instead of proprietary, locked-down end-to-end solutions that limit flexibility and add unnecessary cost. In addition, open source tools and packages for automation and self-service provisioning can be easily implemented to reduce operational expenses.
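As one concrete illustration of the kind of automation and self-service provisioning the article alludes to, the short Python sketch below turns a declarative description of rack ports into a configuration snippet. The port names, attributes, and generated syntax are hypothetical stand-ins for whatever open source tooling (Ansible, Puppet, or custom scripts) a data center actually uses against its switches.

# Hypothetical self-service provisioning sketch: a declarative desired state
# for two switch ports is rendered into a vendor-neutral config snippet.
# Names, attributes, and the output syntax are illustrative, not a real CLI.

desired_state = {
    "swp1": {"description": "web-server-01", "vlan": 100, "speed_gbps": 25},
    "swp2": {"description": "db-server-01", "vlan": 200, "speed_gbps": 25},
}

def render_config(ports):
    """Turn the desired-state dictionary into configuration text."""
    lines = []
    for name, attrs in sorted(ports.items()):
        lines.append(f"interface {name}")
        lines.append(f"  description {attrs['description']}")
        lines.append(f"  access vlan {attrs['vlan']}")
        lines.append(f"  speed {attrs['speed_gbps']}g")
    return "\n".join(lines)

print(render_config(desired_state))

Because the desired state is plain data, the same description can be version-controlled, reviewed, and rolled out automatically across many racks, which is where the operational savings come from.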
Furthermore, Open Ethernet encourages innovation within the data center. Whereas customizations were previously reserved for giant corporations that could throw an army of programmers at the closed systems making up their infrastructure, Open Ethernet has made open, customizable solutions accessible to organizations of all sizes.
As such, more companies are turning to high-performance, Open Ethernet-enabled components to adapt and thrive in today's high-speed, data-intensive world.