Special Report: Autonomic Computing

Five years ago, IBM's Paul Horn articulated a new way of thinking about Information Technology. In this second article of our business innovation series, we examine how far the industry has come toward realizing his vision of autonomic computing.

October 20, 2006


Is your infrastructure nearing the point at which systems will no longer serve business needs, or even be maintainable? As complex SOAs (service-oriented architectures) and converged applications like VoIP become the norm, IT is paying the price for integrating too many applications on a point-to-point basis using custom-built, undocumented scripts. The result is unstructured procedural activity, ballooning labor costs and an inflexible service-delivery infrastructure that does nothing to improve customer relationships.

What happened to Paul Horn's grand vision of autonomic computing? Simply, while paying lip service to the goal of automation, our industry all too often took the path of least resistance, solving tactical problems but putting off the hard work needed to achieve self-healing systems.

If you're ready to change course, CA, Hewlett-Packard and Microsoft, along with a cadre of smaller software vendors, have joined IBM in seeking to solve practical IT challenges. But to make automation work for your enterprise, you need an understanding of your environment, business objectives, technical requirements, and existing processes and workflow. With a little self-knowledge, enterprises can achieve significant savings.

Actually, we can't entirely blame IT groups for staying in firefighting mode. Accelerating business demands are straining budgets and technical capabilities across organizations of all sizes. Increasing complexity is not new, but the pace has picked up recently, and manually crafting and implementing new processes is often so labor-intensive as to be impractical. It's often a matter of self-preservation.

And time isn't on our side: Critical business services are dependent on the infrastructure, leading to decreased business tolerance for outages and downtime.

In response, some enterprises are turning to IT process automation and optimized service management, as evidenced by increased interest in ITIL (IT Infrastructure Library). Using the ITIL framework, vendors are developing automated processes to greatly reduce human intervention in repetitive and often error-prone incident management. But ITIL is only a piece of the solution.

Understanding Autonomy

The industry has come to accept five critical components that define autonomic computing.

1. THE SYSTEM MUST ANTICIPATE THE BEST RESOURCES to fill a need without involving the user. The system should be able to dynamically configure and reconfigure services depending on always-changing conditions.

2. THE SYSTEM MUST CONTINUOUSLY OPTIMIZE ITSELF. The system must always look for ways to improve performance, optimize capacity and improve the customer experience without user intervention.

3. THE SYSTEM MUST HAVE A LEVEL OF SELF-AWARENESS that lets it understand its own components. This is critical for weighing current status, capacity and usage trends, and for accessing other resources to expand capacity when needed.

4. THE SYSTEM MUST HAVE SELF-HEALING CAPABILITIES so that performance degradations, outages and security threats--routine or catastrophic--have minimal impact on users. The system must detect these problems--ideally before they occur--and take preventive action.

5. THE SYSTEM MUST BE ABLE TO INTERACT with its external surroundings and be based on open standards. This includes understanding its environment and being able to create new policies and rules for communicating with it.
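Taken together, these requirements describe a closed monitor-analyze-act loop that runs without operator intervention. The sketch below is a minimal illustration of that loop in Python; check_health, diagnose and the remediation catalog are hypothetical stubs standing in for real monitoring, provisioning and restart calls. It illustrates the concept, not any vendor's implementation.

```python
import time

# Hypothetical remediation catalog: maps a diagnosed condition to a
# corrective action. In a real system these would call restart,
# provisioning or failover APIs rather than print.
REMEDIATIONS = {
    "service_down": lambda svc: print(f"restarting {svc}"),
    "capacity_low": lambda svc: print(f"requesting more capacity for {svc}"),
}

def check_health(service):
    """Placeholder monitor step: gather status, capacity and usage metrics."""
    return {"service": service, "responding": True, "utilization": 0.42}

def diagnose(metrics):
    """Placeholder analysis step: turn raw metrics into a named condition."""
    if not metrics["responding"]:
        return "service_down"
    if metrics["utilization"] > 0.85:
        return "capacity_low"
    return None

def control_loop(services, interval_seconds=60):
    """Monitor, analyze and act continuously, without user intervention."""
    while True:
        for svc in services:
            condition = diagnose(check_health(svc))
            if condition:
                REMEDIATIONS[condition](svc)  # self-healing / self-optimizing action
        time.sleep(interval_seconds)
```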

A Practical Approach

Critical to the success of any autonomic computing project is setting expectations early on regarding what can be delivered realistically. Take a hard look at your organization's level of operational maturity (see "How Mature Is Your Organization?" below). The ability to collect configuration information on network and server assets, access customer information for business-impact analysis, and provide data on internal and external SLAs (service-level agreements) must be in place before automation can be realized.

How Mature Is Your Organization?


If you're in the "Turmoil" or "Reactive" stages, as described in the chart, you'll get a much greater return from implementing basic monitoring and management capabilities and developing formal processes to deal with incident, change and configuration management. iConclude's OpsForce, Opalis' Integration Server and RealOps' AMP, among other products, can help organizations seeking to reach the "Proactive" or "Service-based" levels achieve a variety of goals, including:

>> AUTOMATING BUSINESS AND OPERATIONAL PROCESSES among monitoring applications and the service desk to reduce service-impacting outages and cut MTTR

>> DEVELOPING A SERVICE-FOCUSED IT STRATEGY to scale across multiple organizational units, including a foundation of consistent operational workflows

>> OFF-LOADING ERROR-PRONE, MANUALLY REPETITIVE TASKS through automated workflow or visual guidance

>> APPLYING PREDEFINED RESOLUTION STEPS to common problems in the network and application infrastructure

>> PROVIDING STANDARDIZED INTEGRATION POINTS to critical systems, such as provisioning, order entry, billing and monitoring, and developing automated processes for the exchange of data among these systems
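The last goal, standardized integration points, is often the most concrete place to start: in practice it amounts to a thin adapter that normalizes data from one system and hands it to another. The fragment below is a minimal sketch in Python, assuming hypothetical REST endpoints for a monitoring tool and a service desk; the URLs, field names and severity mapping are illustrative, not any particular vendor's API.

```python
import requests  # assumed available; any HTTP client would do

MONITORING_EVENTS_URL = "https://monitoring.example.com/api/events"        # hypothetical
TICKETING_INCIDENTS_URL = "https://servicedesk.example.com/api/incidents"  # hypothetical

def normalize(event):
    """Map a monitoring event onto the fields the service desk expects."""
    return {
        "summary": event["message"],
        "ci": event["host"],  # affected configuration item
        "severity": {"critical": 1, "major": 2, "minor": 3}.get(event["level"], 4),
        "source": "monitoring",
    }

def forward_open_events():
    """Pull open events from monitoring and open matching trouble tickets."""
    events = requests.get(MONITORING_EVENTS_URL, params={"state": "open"}, timeout=10).json()
    for event in events:
        response = requests.post(TICKETING_INCIDENTS_URL, json=normalize(event), timeout=10)
        response.raise_for_status()
```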

Tools For The Job

Most vendors looking to make a mark in autonomic computing are targeting large, technology-dependent organizations, typically between the "Reactive" and "Proactive" stages. These enterprises have substantial investments in network- and application-management tools, yet their operations teams still rely on highly manual processes to manage incidents, configuration and changes to the environment.

More important, it often takes minutes or hours to determine how a change or event affects customers and how to prioritize tasks based on existing service levels and business needs. For these organizations, autonomic computing sounds more like a science-fiction movie than a reality they could implement.

Not so. IBM has developed an Autonomic Computing Toolkit specifically for problem resolution and is working to integrate autonomic capabilities across its entire product line. CA and HP also are looking to get into this game. HP's Adaptive Enterprise focuses on deploying an application infrastructure with an emphasis on Web services integration and interaction with other HP products across the enterprise. With its Unicenter Enterprise Job Management, CA envisions a cross-platform job-management engine and agents to support the scheduling of processes in any application environment. Microsoft's series of business-management applications, called Microsoft Dynamics, is aimed at enabling businesses to make better decisions based on system data.

For companies that aren't quite ready to buy in to a large-scale architecture, smaller software vendors have emerged over the past two years looking to address the daily challenges of operations centers. Products from iConclude, Opalis and RealOps, for example, offer broad application problem resolution and optimize network- and security-management functions.

Elements of an Autonomic Infrastructure

The common thread across all the major frameworks is creating a seamless, integrated IT environment so that data can be gathered across information silos. The architectural concepts behind the lower-level applications are similar as well: focused on problem resolution, they typically sit between the monitoring software and the infrastructure. Because all of them build on existing investments in system-monitoring tools and service-desk applications, they'll do little to help organizations still at the "Reactive" stage of maturity.

After IT has defined the specific workflow desired, the applications execute procedures for each incident. IT is then alerted in an automated way or through a visually guided process. Procedures may include acknowledging alerts, creating new trouble tickets, performing specific predefined troubleshooting tests, performing repair actions when possible, and eventually closing tickets and clearing alerts.
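As a rough illustration, the per-incident procedure described above can be expressed as a short script. The stubs below are hypothetical placeholders for calls to the monitoring tool, service desk and managed device; the flow, not the function names, is the point.

```python
# Hypothetical stubs for the systems involved; real implementations would
# call the monitoring tool, service desk and managed devices.
def acknowledge(alert): print(f"ack {alert['id']}")
def open_ticket(alert): return {"id": f"TKT-{alert['id']}", "status": "open"}
def run_diagnostics(alert): return {"cause": "hung process", "repairable": True}
def attempt_repair(alert, diagnosis): return True
def close_ticket(ticket, note): ticket.update(status="closed", note=note)
def clear_alert(alert): print(f"clear {alert['id']}")

def handle_incident(alert):
    """Predefined resolution flow: acknowledge, ticket, test, repair, close, clear."""
    acknowledge(alert)
    ticket = open_ticket(alert)
    diagnosis = run_diagnostics(alert)
    if diagnosis["repairable"] and attempt_repair(alert, diagnosis):
        close_ticket(ticket, note=diagnosis["cause"])
        clear_alert(alert)
    else:
        ticket["note"] = diagnosis  # escalate to an operator with diagnostics attached
    return ticket
```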

If implemented correctly, these applications can lower production support staff costs, reduce errors and improve process completion time. More important, they move an organization to an enterprise-level view of the business. Online executive dashboards, for example, when implemented in the context of key-process viewpoints, provide focused, actionable views of real-time service-assurance information.

As in the operations management industry, we expect to see a great deal of innovation and consolidation in the next several years as the large software companies with stakes in the autonomic computing market snap up smaller, innovative vendors.

But it's worth making a move: IT executives leading automation initiatives will become agents of change within their organizations. The vision is profound, and when realized, will have a dramatic impact on IT groups and the business by allowing people to focus on the core business--not the escalating cost of managing complexity.

Lay The Telegraph Line

Automating typically labor-intensive processes is all well and good, but real and lasting benefits will prove elusive if underlying service processes are inefficient or downright dysfunctional.

A successful automation strategy encompasses not only the technical aspects of the infrastructure, but the operational processes involved in delivering service, such as change, configuration, incident, release and problem management. Organizations must roll out operational processes and workflows in concert with automation technology. To make that happen, IT must have an understanding of business drivers and requirements.

When choosing software, ask the following:

» Is my organization ready? Do we have the underlying applications in place to automate our processes, or do we have gaps we should fill, for example in performance or configuration management? Master these before you look to cross-platform application integration and problem resolution.

» What type of interface does the software provide to develop and modify my processes? You don't want your staff to have to learn a complex language to build or manage processes. If the interface does not enable easy construction and modification of workflows, management will suffer.

» What prebuilt connections are available for my applications? Develop a checklist of apps that must communicate and be sure the automation framework has prebuilt connections. If a connection is not available, will the vendor develop and maintain it for you?

» Do you control all of the applications that must connect to the automated computing environment? Often, organizations have devolved into silos. What if the group that controls the service-desk application won't let you connect it with the automation software?

» How does the automation software scale? Has the vendor demonstrated the product's ability to scale both in volume of processes and in the number of connected apps? If you have a distributed data center, how will the software deal with multiple centers and distributed apps?

» Does the automation software include many prebuilt processes for problem resolution? While these can be a great start, how much will you need to customize these procedures to meet your specific business rules and policies? How much support will the vendor provide for customized processes?

» Can you accomplish some automation tasks using products or tools you own? Many network management applications now include workflow engines with prebuilt connections, for example. Instead of investing in a new tool, you may be able to expand one of your existing products to develop your own workflow and see an immediate return.

Bottom line, IT managers must ensure that they don't get further into a morass. Remember, the goal of automating systems is to reduce complexity, not increase it by requiring staff to learn new tools and programming languages and adding steps to routine maintenance tasks.

Sleeping Beauty: Although some automation functions may not seem complex, implementing them can have a dramatic effect on the IT staff. At one organization, a system administrator had to wake up at 1 a.m. daily to remotely start the backup. Now the company uses Opalis Integration Server to automate that function and take corrective action if any of the processes fail.
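The same pattern can be expressed in a few lines of any scripting language: run the backup on a schedule, retry on failure, and escalate if corrective action doesn't succeed. The sketch below is generic and assumes a hypothetical backup script and notification hook; it is not the Opalis product.

```python
import subprocess
from datetime import datetime

BACKUP_CMD = ["/usr/local/bin/run_backup.sh"]  # hypothetical backup script
MAX_ATTEMPTS = 2

def notify_on_call(message):
    """Placeholder corrective action: page or e-mail the on-call engineer."""
    print(f"{datetime.now().isoformat()} ALERT: {message}")

def nightly_backup():
    """Run the backup, retry once on failure, and escalate if it still fails."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        result = subprocess.run(BACKUP_CMD, capture_output=True, text=True)
        if result.returncode == 0:
            return True
        notify_on_call(f"backup attempt {attempt} failed: {result.stderr.strip()}")
    return False

# Scheduled at 1 a.m. by cron or a job scheduler, e.g.:
#   0 1 * * * /usr/bin/python3 /opt/scripts/nightly_backup.py
if __name__ == "__main__":
    nightly_backup()
```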

Nab Runaway Costs: Transporeon, a European e-logistics solution provider, used iConclude's OpsForce to develop more than a dozen process flows to help with internal repetitive IT tasks, such as adding user accounts, performing backups and creating HTML maintenance reports. OpsForce improved productivity and reduced overall maintenance costs by more than 40 percent. As a result, key resources now help drive faster response times and increased customer satisfaction.

Michael Biddick is vice president of solutions for Windward Consulting Group, a systems integration firm that helps organizations improve operational efficiency. Write to him at [email protected].
