The Importance of Instrumentation to Visibility

Instrumentation of components from code to application services to client is the only viable means of achieving end-to-end visibility.

Lori MacVittie

May 12, 2020


Visibility is a challenge. It's always been a challenge, but the adoption of cloud and of modern, container-based applications has increased both its magnitude and the urgency to address it.

A Kentik survey of AWS re:Invent attendees found fascinating insights on the subject of visibility. To achieve it, "59% of respondents reported using at least two tools for visibility into their cloud applications. Thirty-five percent (35%) of respondents use three or more tools."

These tools include log management, application performance management (APM), network performance management (NPM), and a broad range of open source tools. Practitioners need multiple tools because they understand that visibility into one layer – the application – isn't enough to realize full visibility. The network, the operating environment, and dependent resources like storage and third-party components all play a role in reaching the nirvana of monitoring: total visibility.

Using tools to spy on components and extract the needed measurements is no longer viable. In the days of a single data path, with absolute control over network and compute and fairly static applications, this model made sense. We had control over every component at just about every hop in the data path, and thus the ability to spy on them all via established protocols or an installable agent.

Today, modern apps have disrupted this pattern with volatile instances and dynamic data paths. Cloud distributes those paths and eliminates significant amounts of control.

Adding another layer of complexity is the composition of the data path itself. Organizations today operate, on average, more than ten distinct application services to deliver and secure applications. Those ten are almost never operated by the same team. Siloed, single-function IT organizations are still the norm, and each has varying responsibility for those application services. Developers own the application. DevOps owns the application infrastructure. And yet another team might own the cloud experience.

Taken together, this all makes extracting and making sense of visibility data a Sisyphean task.

A better route is to enable components – whether application, network, or system – to emit the information we need. This eliminates the frustration over a lack of control in the cloud and remediates the problem of constant change in container clusters. If every component were instrumented to emit telemetry – remotely collected measurements – we might finally be able to achieve end-to-end visibility.
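
To make that concrete – a minimal sketch rather than anything prescribed here – the snippet below shows what "emitting telemetry" can look like inside an application component, using the OpenTelemetry Python SDK. The service name, span name, and attribute are invented for the example, and the console exporter stands in for whatever collector an operations team actually runs:

    # A component describing its own work as it happens, rather than
    # waiting for an external tool to scrape or spy on it.
    # Assumes the opentelemetry-sdk package is installed; names are illustrative.
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    # Export spans to the console here; in practice, to a collector endpoint.
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("checkout-service")  # hypothetical component name

    def process_order(order_id: str) -> None:
        with tracer.start_as_current_span("process_order") as span:
            span.set_attribute("order.id", order_id)
            # ... business logic here ...

    if __name__ == "__main__":
        process_order("A-1001")

The point is less the specific SDK than the pattern: the component emits its own measurements, so visibility no longer depends on controlling the wire or the host it runs on.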

This notion – end-to-end visibility – has become a critical success factor for organizations that recognize the importance of monitoring application health and performance. Users – consumers and corporate alike – are not interested in what they consider technical details when it comes to poorly performing applications that impact their entertainment or productivity. Especially now, amidst a pandemic that keeps us close to home and reliant on applications for everything from school to work to getting our groceries.

It's incumbent on practitioners to care about the details. And they do. Operators are just as frustrated as users by badly behaving applications, because it's operators who must sift through myriad logs, scroll through consoles, and try to piece together the data path to discover the cause of a poor user experience.

Full visibility – from the code executing in a container in the public cloud, across the application services that deliver and secure it, to the browser in which the user has the application loaded – is now an imperative. A holistic, comprehensive view of performance and health is critical to maintaining the user experience and the sanity of the operator.

That holistic, end-to-end view is not going to come from stitching together logs from two or three or more different tools spying on the data path. It will need the active participation of every component to emit the information needed.
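
What "active participation" can mean in practice is that every hop tags what it emits with a shared identifier, so the pieces can be joined downstream instead of stitched together by hand. The sketch below is illustrative only – the component name, header, and log fields are assumptions, standing in for whatever convention (such as W3C Trace Context) an organization adopts:

    # Each hop reuses the correlation ID it was handed (or mints one if it's
    # the first hop), logs a structured event carrying that ID, and passes
    # the ID along so the next component can do the same.
    import json
    import time
    import uuid

    def handle_request(incoming_headers: dict) -> dict:
        corr_id = incoming_headers.get("x-correlation-id", str(uuid.uuid4()))

        # A structured event a collector can join with events from every
        # other component carrying the same correlation ID.
        print(json.dumps({
            "ts": time.time(),
            "component": "api-gateway",   # hypothetical component name
            "event": "request.received",
            "correlation_id": corr_id,
        }))

        # Propagate the ID downstream.
        return {"x-correlation-id": corr_id}

    if __name__ == "__main__":
        outbound = handle_request({})   # first hop mints the ID
        handle_request(outbound)        # a downstream hop reuses it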

Instrumentation of components from code to application services to client is the only viable means of generating the telemetry necessary to finally achieve end-to-end visibility in any environment.

 

About the Author

Lori MacVittie

Principal Technical Evangelist, Office of the CTO at F5 Networks

Lori MacVittie is the principal technical evangelist for cloud computing, cloud and application security, and application delivery and is responsible for education and evangelism across F5's entire product suite. MacVittie has extensive development and technical architecture experience in both high-tech and enterprise organizations. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she authored articles on a variety of topics aimed at IT professionals. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University. She also serves on the Board of Regents for the DevOps Institute and CloudNOW, and has been named one of the top influential women in DevOps.
