How to Put Your ‘Carbonivore’ Data Center on a Diet
By taking a strategic approach to their monitoring tools and tech stack, data centers can both help their bottom line and reduce their carbon footprint.
May 12, 2023
Today’s data centers serve as the backbone of modern computing, expected to adapt continuously to the ever-changing internet and cloud landscape. Each one comprises a complex maze of separate systems, appliances, tools, and servers that capture an endless flow of data and communicate it via network traffic. But this insatiable demand comes at a price.
Data centers account for approximately 2 percent (and rising) of all global carbon emissions and consume vast amounts of electricity. In fact, the IT industry isn’t far behind the chemical and petrochemical industries, which account for 3.6 percent of global carbon emissions.
Many data center operators focus on physical plant management, assuming that the underlying traffic processing is a fixed cost, one that will always simply exist. But sustainability is no longer a nice-to-have for the global data center industry. With added pressure from national and international leaders, many tech companies are working to achieve carbon neutrality by 2030.
The industry is at a crossroads; the importance of power savings in data centers cannot be overstated.
Hidden Costs, Hidden Carbon
We’re seeing the push for reduced carbon emissions from the top down. The Biden Administration recently announced a $2.5B investment to scale carbon capture technology to offset carbon emissions. This pressure, coupled with the current economic climate for tech and other industries, is the perfect catalyst for organizations to act on energy savings.
After all, energy costs are a significant expense for data centers. Consider, for example, that network analytics probes used across service provider and enterprise networks consume 600 W of power to process 16 Gbps of network traffic. Monitoring 100 Gbps of data center traffic therefore requires seven probes, which together draw 4,200 W. A year of monitoring with just this one tool consumes 36,792 kWh, roughly the annual usage of 100 home refrigerators.
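The arithmetic behind those figures is worth making explicit. Here is a back-of-the-envelope sketch; the 600 W and 16 Gbps figures come from the example above, and the refrigerator comparison assumes roughly 368 kWh per unit per year:

```python
import math

PROBE_WATTS = 600          # power draw per probe (figure from the example)
PROBE_CAPACITY_GBPS = 16   # traffic one probe can process
HOURS_PER_YEAR = 8760

traffic_gbps = 100
probes = math.ceil(traffic_gbps / PROBE_CAPACITY_GBPS)   # -> 7 probes
total_watts = probes * PROBE_WATTS                        # -> 4,200 W
kwh_per_year = total_watts * HOURS_PER_YEAR / 1000        # -> 36,792 kWh

# Assumption: a typical home refrigerator uses ~368 kWh per year.
print(round(kwh_per_year / 368))                          # -> ~100 refrigerators
```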
Energy costs have been rising globally since 2020 and are forecast to remain elevated through at least 2024. Understanding where decision-makers can make substantive changes to their existing tools can significantly reduce energy consumption and deliver substantial cost savings.
How to Reduce Carbon Footprint and Costs for Data Centers
All data centers use various security and monitoring tools to capture data communications via network traffic. Many organizations use dozens of separate tools deployed in clusters of multiple individual appliances or as virtualized tools running in traditional servers. The costs of operating these tools add up.
One of the most strategic ways to cut energy costs and the underlying carbon footprint is to determine which network traffic is processed by which tools. Gaining visibility here can help organizations streamline efforts within the data center.
Here are four ways to make this happen:
Application Filtering: Application filtering identifies well-known applications by their traffic signatures, even when the traffic is encrypted. This creates a triage system of high-risk and low-risk data by filtering out traffic from high-volume trusted applications, like YouTube or Windows Update. Doing so dramatically reduces the amount of data flowing to the monitoring tools, and with it the energy those tools consume.
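As a rough illustration of the idea, here is a minimal sketch that classifies flows by TLS SNI hostname, a stand-in for real signature-based inspection, and drops trusted high-volume applications before they reach the tools. The flow structure and domain list are hypothetical:

```python
# Drop flows from trusted, high-volume applications before forwarding.
TRUSTED_HIGH_VOLUME = ("youtube.com", "windowsupdate.com")

def should_forward(flow: dict) -> bool:
    """Forward a flow to the tools unless it belongs to a trusted app."""
    hostname = flow.get("sni", "")   # signature extracted from the handshake
    return not hostname.endswith(TRUSTED_HIGH_VOLUME)

flows = [{"sni": "www.youtube.com"}, {"sni": "internal.example.corp"}]
print([f for f in flows if should_forward(f)])  # only the internal flow remains
```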
Deduplication: Analyze each network packet only once. Data center networks today are built with high degrees of resiliency and redundancy to assure always-on operations and availability. That approach, however, also creates duplicate packets across the network, meaning analytics tools may see two to four times the traffic volume actually generated at the end-user level. Deduplication identifies and removes duplicate packets before network data is sent to the tools. Fewer redundant packets mean lower energy consumption.
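A minimal sketch of the technique, assuming a packet is identified by a digest of its bytes and that duplicates arrive close together (production deduplication engines hash selected header fields and bound the window in microseconds rather than by count):

```python
import hashlib
from collections import OrderedDict

WINDOW = 1024  # remember the most recent N packet digests
_seen: "OrderedDict[bytes, bool]" = OrderedDict()

def is_duplicate(packet: bytes) -> bool:
    """True if an identical packet was seen within the window."""
    digest = hashlib.sha1(packet).digest()
    if digest in _seen:
        return True
    _seen[digest] = True
    if len(_seen) > WINDOW:
        _seen.popitem(last=False)  # evict the oldest entry
    return False

packets = [b"GET /a", b"GET /a", b"GET /b"]   # the second is a duplicate
print(sum(not is_duplicate(p) for p in packets), "of", len(packets), "forwarded")
```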
Flow Mapping: Flow mapping sends specific subsets of traffic (by subnet, protocol, VLAN, and so on) to particular tools, ensuring that each tool receives only the network data it needs. An email security appliance, for example, needs to see only email traffic. Put plainly: just as information overload drains a person's energy, surplus traffic drains a data center's.
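A minimal sketch of a flow-mapping rule table follows; the tools and match criteria are illustrative, not any real product's configuration:

```python
from ipaddress import ip_address, ip_network

# Each rule pairs a match predicate with the one tool that needs the traffic.
RULES = [
    (lambda p: p["dst_port"] in (25, 465, 587), "email_security_appliance"),
    (lambda p: p["vlan"] == 200, "voip_monitor"),
    (lambda p: ip_address(p["src_ip"]) in ip_network("10.1.0.0/16"),
     "app_performance_tool"),
]

def route(pkt: dict):
    """Return the tool that should receive this packet, or None."""
    for matches, tool in RULES:
        if matches(pkt):
            return tool
    return None  # unmatched traffic is sent to no tool at all

pkt = {"src_ip": "192.0.2.9", "dst_port": 587, "vlan": 10}
print(route(pkt))  # -> email_security_appliance
```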
Flow Slicing: Flow slicing is a highly efficient optimization method that drops the non-initial packets in each user data session. Many tools need to see only the initial setup, header, and handshake information, not every packet (video frames, for example). In real-world deployments, flow slicing reduces tool traffic by 80 to 95 percent. There is no reason to inundate tools with non-essential packets that waste valuable energy to process.
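A minimal sketch of flow slicing, assuming flows are keyed by the classic 5-tuple and that the first eight packets cover setup and handshake (the budget is an illustrative assumption):

```python
from collections import defaultdict

PACKETS_PER_FLOW = 8   # assumed budget covering setup, headers, handshake
_counts = defaultdict(int)

def forward(pkt: dict) -> bool:
    """Forward only the first PACKETS_PER_FLOW packets of each flow."""
    key = (pkt["src_ip"], pkt["dst_ip"],
           pkt["src_port"], pkt["dst_port"], pkt["proto"])
    _counts[key] += 1
    return _counts[key] <= PACKETS_PER_FLOW

pkt = {"src_ip": "10.0.0.5", "dst_ip": "10.0.9.9",
       "src_port": 51515, "dst_port": 443, "proto": "tcp"}
print(forward(pkt))  # True for the first 8 packets of this flow, then False
```

Everything past the per-flow budget is dropped before it reaches the tools, which is where the 80 to 95 percent reduction comes from.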
Combining these strategies can dramatically increase tool efficiency, reducing both how much traffic the tools process and the number of tool instances needed. If the tools are physical appliances, fewer devices are required. If the tools are virtualized, the savings are typically even greater, since each instance no longer requires a dedicated piece of hardware, further reducing both the devices deployed and the energy consumed.
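To see how the savings compound, here is a rough illustration reusing the probe figures from earlier and assuming a 90 percent traffic cut, a midpoint of the 80 to 95 percent range above:

```python
import math

PROBE_WATTS, PROBE_CAPACITY_GBPS, HOURS_PER_YEAR = 600, 16, 8760

def annual_kwh(traffic_gbps: float) -> float:
    probes = math.ceil(traffic_gbps / PROBE_CAPACITY_GBPS)
    return probes * PROBE_WATTS * HOURS_PER_YEAR / 1000

print(annual_kwh(100))         # before: 7 probes -> 36,792 kWh per year
print(annual_kwh(100 * 0.1))   # after a 90% cut: 1 probe -> 5,256 kWh per year
```

Under those assumptions, seven probes collapse to one, and annual consumption drops from 36,792 kWh to 5,256 kWh.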
Deep Observability Is the Path Forward to Power and Cost Savings
Though cost savings are a high priority for today’s organizations, reduced energy usage and a commitment to sustainable, environmentally responsible outcomes are the icing on the cake. Power savings in data centers of all types mitigate environmental impact, reduce operational costs, and promote sustainability.
Best-in-class organizations should go beyond the physical plant and use intelligent capabilities, such as deep observability, to reduce and consolidate at the source. This approach avoids downstream energy-intensive processing.
By taking a more strategic approach to observability, maximizing tools’ capabilities, and cutting unnecessary traffic, organizations can help both their bottom line and the planet.
Michael Dickman is the Chief Product Officer of Gigamon.