Combating Increased Complexity
Organizations need an enterprise-wide data and observability strategy – and standardized visibility – to combat the complexity of the digital world.
February 17, 2022
Let’s be honest with each other: complexity can only be hidden, never eliminated. Complexity is a part of operating in a multi-cloud, remote workforce, digital-as-default world, and it’s not going away.
Now, before you begin day drinking in the face of such stark honesty, let’s dig in a bit more on the topic of complexity and what you can do to combat its growing tendrils through IT.
First, let’s look at why complexity exists in the first place.
In the beginning …
Every good story starts with setting the stage. To talk about complexity, we need to recognize that the primary cause is variety. There are a variety of vendors and providers, application architectures and models, and solutions to address everything from performance to availability to security.
Our research this year found that organizations use, on average, 14 different types of services on-premises to secure and deliver applications. Those same organizations also use, on average, 11 different types of services in the public cloud to achieve the same goal. These services range from load balancing to caching to SSL VPN to network and application firewalls, and the list extends to DDoS and bot protection, anti-fraud, secure web gateways, and ingress controllers.
No single vendor provides every service on that list, so you’re effectively being asked to juggle consoles and APIs from multiple vendors across core and cloud.
I could continue with hard numbers: the average number of cloud providers used, the average number of XaaS offerings consumed, and a very interesting breakdown of the enterprise portfolio across five distinct application architectures. But I won’t beat you over the head with that right now. Suffice it to say that complexity is an inevitable product of the diversity of an increasingly digital business that operates in multiple clouds, with many different services, a robust application portfolio, and likely a remote workforce.
Complexity is the result of variances across vendors and environments that require matching up round pegs and square holes.
Yes, I meant to mix that metaphor. Because it really is like that today.
Combating complexity with standardization
Platforms such as OpenShift and OpenStack were among the earliest attempts to standardize provisioning and management of compute, network, applications, and services across environments.
Tools like Terraform offer similar standardization for deployment, hiding the gory details and, thus, the complexity under a more elegant means of interaction.
Standards like OpenTelemetry normalize the generation and format of the operational data (telemetry) that every role needs to unlock visibility across every layer of the IT stack, no matter where it might be located. (There’s a short code sketch of what that looks like below.)
Distributed cloud attempts to standardize private and public cloud with an abstraction layer that presents as a consistent interface no matter what the underlying environment might look like.
What all these have in common is that they attempt to normalize the operation of the traditional "infrastructure" domain by offering an abstraction layer that stays consistent regardless of the target environment.
The complexity is still there. It’s just been obfuscated under a standardized set of interfaces.
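To make that concrete, here’s a minimal sketch of what OpenTelemetry’s abstraction looks like in practice, using its Python SDK. The service and span names are illustrative. The application code talks only to the vendor-neutral API; the exporter – the part that actually varies by environment – is wired up once, separately.

```python
# A minimal sketch of OpenTelemetry's abstraction layer in Python.
# Requires: pip install opentelemetry-sdk
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (
    BatchSpanProcessor,
    ConsoleSpanExporter,
)

# Wiring: pick an exporter once. Swapping ConsoleSpanExporter for
# another exporter changes where telemetry goes, not how it's made.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

# Application code: only the standardized API, never the backend.
tracer = trace.get_tracer("checkout-service")  # illustrative name

with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.items", 3)  # hypothetical attribute
```

Swap the console exporter for one pointed at your backend of choice and the instrumentation code doesn’t change. That’s the standardization doing its job.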
If you're honest with yourself, it isn't the complexity that bothers you. It's having to deal with the complexity every hour of every day. I certainly don't mind that today's Wi-Fi access points automatically discover what they need on my network and require no configuration whatsoever to start working.
The complexity is still there; it's just been hidden from me.
Whether you’re just embarking on or continuing the journey to modernize IT in support of digital transformation, you’re going to run into a lot of opportunities to standardize as a means to reduce complexity and, through it, the cost of operations.
Prioritizing standardization of visibility
One of the highest priorities for IT in its battle to manage complexity should be standardizing visibility.
The capability to understand the current status and performance of all digital assets, everywhere, is a critical one that underpins just about every other capability in a digital business. It’s a key weapon to combat the complexity inherent in operating a distributed digital portfolio. That’s because the cost of operations is not just in deployment and day-to-day activities. It's also in the too-often lengthy and frustrating process of digging into incidents, attacks, and performance degradations.
And it’s costly. According to Gartner, the average cost of IT downtime is $5,600 per minute. Much of that is borne by operations as they try to piece together what happened – and why – from all the various sources of telemetry spread across systems and environments. A contributing factor is the reality that data silos make it difficult to analyze relevant information in context. There's often no way to correlate data from the app with data from its web server or from the services that delivered and secured it. In short, complexity frustrates the visibility operations need to quickly assess an incident and address it.
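This is exactly where a standard like OpenTelemetry earns its keep. Here’s a minimal sketch of context propagation using OpenTelemetry’s Python API and its default W3C Trace Context format; the service names and the carrier dict are illustrative. A shared trace ID lets the app tier and the tier behind it show up as one correlated trace instead of two disconnected silos.

```python
# Sketch: propagating trace context across tiers so telemetry correlates.
# Assumes a TracerProvider is configured, as in the earlier sketch.
from opentelemetry import trace
from opentelemetry.propagate import inject, extract

tracer = trace.get_tracer("frontend")  # illustrative service name

# Tier 1: the app starts a span and injects its context into the
# outgoing request headers (a W3C traceparent header by default).
with tracer.start_as_current_span("handle-request"):
    headers = {}
    inject(headers)  # headers now carry the trace context downstream

# Tier 2: the downstream service (e.g., behind the load balancer)
# extracts that context, so its spans join the same trace.
ctx = extract(headers)
with tracer.start_as_current_span("backend-work", context=ctx):
    pass  # spans from both tiers now share one trace ID
```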
Unifying your approach to visibility is not a small task. A standard method and format like that offered through OpenTelemetry opens the door to the data, but you still have work to do.
As you’re reevaluating infrastructure and application services, make the inclusion of OpenTelemetry a top criterion. If that load balancer or WAAP doesn’t natively support OpenTelemetry, dig into plug-in or ecosystem options – or consider an alternative.
As new applications are developed, instrumentation should be a requirement – both for custom code and for the platform the app relies on. This isn't just a technology checkbox; the customer experience capabilities of a digital business depend on that telemetry.
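As a sketch of what that requirement can look like in practice, OpenTelemetry’s ecosystem offers auto-instrumentation for common platforms (Flask here, purely as an example), while custom business logic gets manual spans:

```python
# Sketch: instrumenting both the platform and custom code.
# Requires: pip install opentelemetry-sdk opentelemetry-instrumentation-flask flask
from flask import Flask
from opentelemetry import trace
from opentelemetry.instrumentation.flask import FlaskInstrumentor

app = Flask(__name__)

# Platform instrumentation: the ecosystem package traces every
# request through Flask with no changes to handler code.
FlaskInstrumentor().instrument_app(app)

tracer = trace.get_tracer("orders")  # illustrative name

@app.route("/order")
def order():
    # Custom-code instrumentation: business logic gets its own span,
    # nested under the auto-generated request span.
    with tracer.start_as_current_span("calculate-pricing"):
        return "ok"
```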
Being able to generate normalized telemetry is great, but where will you store it? How will you analyze it? A data and observability strategy for digital business is not a nice-to-have; it's a necessity.
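For the storage-and-analysis side, one common pattern – sketched here under the assumption that an OpenTelemetry Collector or compatible backend is listening at an OTLP endpoint, with the address as a placeholder – is to batch and ship telemetry over OTLP and let the backend handle retention and analysis:

```python
# Sketch: shipping normalized telemetry somewhere it can be stored
# and analyzed.
# Requires: pip install opentelemetry-sdk opentelemetry-exporter-otlp
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Placeholder endpoint: point this at your collector or observability
# backend. That choice is the "data and observability strategy" part.
exporter = OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True)

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
```

The exporter is configuration, not application code: the same instrumented app can feed whatever analysis pipeline your strategy lands on.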
Digital business – modern business – relies on data to manage the customer experience, defend against attacks, and uncover insights that increase revenue. Visibility into customer experience and behavior is critical when that experience and engagement occurs in a digital business.
Organizations need an enterprise-wide data and observability strategy, and standardization of visibility must be a top priority if you want to combat the reality of operating in a complex digital world.