The Survivor's Guide to 2004: Network and Systems Management
NSM products are improving, slowly but steadily, and we're seeing standards gains to boot.
December 19, 2003
But the frameworks do emphasize one thing correctly: the total management cost of network ownership. Today, while the towering cost of network and systems management frameworks is obvious, the hidden, below-the-surface "soft dollars" spent to maintain many enterprise networks go uncounted, often unseen. These dollars represent real people doing real procedures in support of real network services, but they're not counted directly as network and systems management costs. Thus enterprises are often reluctant to purchase network and systems management systems because their cost seems out of whack with their value. But it's not. Consider how much time and effort would be saved as a result of a framework deployment.
Zoom In
But that's the big picture. Back on Earth, we need more practical answers. Network and systems management problems need to be fixed, now, with as little cost and effort as possible. This reality means much network and systems management application development is focused on the low-hanging performance-management fruit (read: network-performance data).
Lately, though, this has become too much of a good thing. Performance vendors pluck data from everywhere. The ubiquitous deployment of SNMP in network and system devices has created a huge harvest of information, and products from BMC Software, Concord Communications, Entuity, ProactiveNet and others collect performance information from a very long and growing list of proprietary agents. These include Computer Associates system agents, Cisco SAA (Service Assurance Agent), and PeopleSoft and Oracle application transactions.
Some key trends to help you handle this glut in 2004:
• Dynamic thresholds: When it comes to selecting the right thresholds to trigger alerts, one size does not fit all networks. Recognizing that, better performance-management products seek to set thresholds that represent the norm for each particular network. This is key because static thresholds mean a flood of false warnings, and as with the boy who cried wolf, critical notifications soon cease to be effective operational diagnostic triggers.
Two companies creating such dynamic thresholds are ProactiveNet and Panacya. Both vendors' products watch existing traffic and set high and low values based on differences seen over a range of times. For example, traffic on a particular interface or for a specific service might seem unusually high, but is it high every month on the day after accounting closes the books? This is the kind of sensitivity that lets you see a threshold in the context of what's normal for your network (a rough sketch of the baselining idea follows this list).
• Granular reports: SeaNet Technologies is cutting through data glut with granular data and smart rollup features. Using probes and gathering all data flows, its SeaView product supplements the usual min/max/avg rollups with data for every TCP transaction, plus the variance for each, detail that is lost when data is averaged (see the rollup sketch after this list).

• Packet shaping: Packet-shaping vendors, including Allot Communications and Packeteer, also are seeking to stem the data overrun. On wide-area links--and especially at the Internet border--their products' ability to explicitly allow traffic based on application can conserve bandwidth while knocking out performance bottlenecks and security threats. For example, packet monitoring at the application layer can catch worms and other nastiness trying to cross the appliance into your network.
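To make the dynamic-threshold idea concrete, here is a minimal sketch of per-time-slot baselining in Python. The hour-of-week bucketing, three-sigma band and sample counts are assumptions for illustration, not any vendor's actual algorithm.

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean, stdev


class DynamicThreshold:
    """Learn what is 'normal' for each hour-of-week slot and flag
    readings that stray too far from that slot's baseline."""

    def __init__(self, sigmas=3.0, min_samples=5):
        self.history = defaultdict(list)   # (weekday, hour) -> past samples
        self.sigmas = sigmas               # deviations that count as abnormal
        self.min_samples = min_samples     # wait for a baseline before alerting

    def observe(self, when, value):
        """Record a sample, e.g. bits/sec on one interface."""
        self.history[(when.weekday(), when.hour)].append(value)

    def is_abnormal(self, when, value):
        """True only if the value is unusual for this particular time slot."""
        samples = self.history[(when.weekday(), when.hour)]
        if len(samples) < self.min_samples:
            return False                   # not enough history yet
        mu, sd = mean(samples), stdev(samples)
        return sd > 0 and abs(value - mu) > self.sigmas * sd


monitor = DynamicThreshold()
monitor.observe(datetime(2003, 12, 1, 9), 42_000_000)  # a Monday 9 a.m. sample
```

A month-end spike, for example, ends up in its own baseline bucket, so it stops tripping alerts once the monitor has seen a few of them.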
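And as a quick illustration of why per-transaction variance matters (a generic sketch, not SeaView's implementation), two services with identical average response times can deliver very different experiences:

```python
from statistics import mean, pvariance


def rollup(response_times):
    """Summarize per-transaction response times (in seconds).
    Min/max/avg alone can hide trouble; variance exposes it."""
    return {
        "min": min(response_times),
        "max": max(response_times),
        "avg": mean(response_times),
        "variance": pvariance(response_times),
    }


steady = [0.20, 0.21, 0.19, 0.20, 0.20]    # consistent user experience
erratic = [0.02, 0.45, 0.05, 0.40, 0.08]   # same 0.20 average, wild swings

print(rollup(steady))    # tiny variance
print(rollup(erratic))   # identical average, large variance
```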
This is a rare inclusion of security management into the FCAPS model, one we'd like to see more of. Here's another example of how the twain can meet: The basic functionality of conventional fault-management tools closely resembles that of SIM (security-information management) tools. Products from Arbor Networks and other vendors that characterize traffic for intrusion detection also characterize route flow, a network-management function.
Unfortunately, 2004 will not see a bridging of the crevasse between security-management and network-management functions. Even though FCAPS management nirvana would integrate the distinct groups of people administering networks and security, the reality is that IT management likes keeping access and privacy control in the hands of a few, and security vendors exacerbate this division. We hope vendors will start to merge the two, improving both in the process. A push from enterprises asking for focused fault correlation could drive this union.

The SNMPv3 and SNMPconf standards were refined during 2003, and adoption seems likely for 2004. SNMPv3 is poised to implement Diffie-Hellman key exchange, making it possible to have a private, secure management channel. And SNMPconf, now a standard, includes a DiffServ MIB poised to become a draft specifying the first multivendor standard for configuring network QoS.
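For a sense of what a private, authenticated management channel buys you, here is a minimal SNMPv3 query sketch using the classic synchronous API of the third-party pysnmp library (an assumption for this sketch; version-dependent). The host address, user name and passphrases are placeholders, and the Diffie-Hellman key-change machinery mentioned above, which governs how those keys get provisioned, is outside the scope of the snippet.

```python
# Hedged sketch: requires the third-party pysnmp package (classic sync API).
from pysnmp.hlapi import (
    SnmpEngine, UsmUserData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
    usmHMACSHAAuthProtocol, usmAesCfb128Protocol,
)

# Authenticated and encrypted (authPriv) v3 request -- no community strings
# crossing the wire in the clear, unlike SNMPv1/v2c.
error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        UsmUserData('opsUser', 'authPassphrase', 'privPassphrase',   # placeholders
                    authProtocol=usmHMACSHAAuthProtocol,
                    privProtocol=usmAesCfb128Protocol),
        UdpTransportTarget(('192.0.2.10', 161)),                     # example address
        ContextData(),
        ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)),
    )
)

if error_indication:
    print(error_indication)
else:
    for var_bind in var_binds:
        print(' = '.join(x.prettyPrint() for x in var_bind))
```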
The IEEE isn't sitting on its hands, either. It's busy chiseling out a Layer 2 topology standard, 802.1ab, that will regulate the way network and systems management products piece together networks. Not only will this improve network documentation a hundredfold, it also offers real hope that root-cause analysis in network and systems management systems will improve.
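802.1ab works by having each device advertise its identity to its directly attached neighbors; a management station then stitches the per-device neighbor tables into a map. Here is a toy sketch of that stitching, with hand-fed neighbor data standing in for what a real product would pull from each device:

```python
from collections import defaultdict


def build_topology(neighbor_tables):
    """neighbor_tables maps device -> {local_port: neighbor_device}.
    Returns an undirected adjacency map of the discovered Layer 2 network."""
    topology = defaultdict(set)
    for device, ports in neighbor_tables.items():
        for _port, neighbor in ports.items():
            topology[device].add(neighbor)
            topology[neighbor].add(device)
    return topology


# Hand-fed stand-in for what per-device 802.1ab advertisements might yield.
tables = {
    "core-sw1": {"Gi0/1": "dist-sw1", "Gi0/2": "dist-sw2"},
    "dist-sw1": {"Gi0/24": "access-sw1"},
}

for device, neighbors in sorted(build_topology(tables).items()):
    print(device, "<->", ", ".join(sorted(neighbors)))
```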
On the old-friend product front, both Aprisma and Hewlett-Packard are bringing out new versions of their veteran network-management applications, Spectrum and Network Node Manager, respectively. These stalwarts have been around more than 10 years but continue to innovate.

Aprisma, now successfully on the other side of its spin-out, has begun to make money for parent company Gores Technology Group. In a testament to the faith Gores has in Aprisma, Spectrum will be upgraded in the new year with a redesigned OneClick Java interface. Beyond the product work, Aprisma added customers and profit throughout 2003.
HP, also fresh from a public, gut-wrenching organizational overhaul, plans to roll out a new version of its Network Node Manager at the start of 2004. Version 7.0 promises a more stable Java interface as well as improvements to server-side Layer 2 topologies and fault correlation.
If you're one of the many who shudder at the thought of relying on slow and poorly functioning Java applications, worry less. They are improving, especially when delivered as an application rather than an applet.
Just as significant as the features and functionality that both Aprisma and HP are bringing to their new versions is the signal that reliable, longtime management vendors are finding the revenue to continue their product improvements.
Simulation

One function often left out of the network and systems management fold is simulation. This area usually is reserved for military and government entities and larger service providers, but vendors are looking to help enterprises try out network designs before building. Opnet Technologies, for example, is adding functionality to its IT Guru product to ease the task of setting up simulations, by gathering and learning existing traffic patterns from the network. Making it easier to create the initial setup of a network simulation will put this tool within the technical grasp of more IT shops.

Utility-computing mania is spreading like peanut butter and jelly at a kindergarten play date. What's not to like? Ratcheted up by IBM's and HP's utility-computing initiatives--On Demand and Adaptive Computing, respectively--the market hype for this no-human-intervention IT model is over the top (see "Utility Computing: Have You Got Religion?").
The downside: Someone has to transfer the lessons learned to the computers so that automated response is possible. Of course, IBM, HP and plenty of other management vendors have service groups ready, willing and able to jump in and utility you and your budget to death.
When you get right down to it, the root of the utility-computing vision is the reality of understanding your network and having policies to manage it. This doesn't have anything to do with computing at all: It's good old-fashioned organization. It's planning and procedures. It's all the stuff that IT has done for years to manage what goes in and out of production. Boring and more boring, but essential.
It's no surprise that Cisco Systems, BEA Systems and Computer Associates have utility-computing strategies that are integrated into either or both of HP's and IBM's visions. But smaller vendors have also jumped on the bandwagon. Singlestep Technologies, a promising start-up born out of the music business (it designed controls for light and sound boards for big-hair rock bands in the '80s), has garnered much attention, notably from Ipswitch and IBM. Singlestep's Unity platform is a quick-development application IBM thinks will fill the gap between the automation in its Tivoli management products and the practicality of what operators have to do. Unity will automate data gathering at the time of a failure by quickly creating an application that matches events to actions. This might be as simple as logging on to a network device and pulling the existing configuration and interface status when a downstream device registers a failure. Such functionality could guarantee timely information and free operators from having to drop what they're doing to assure best-practice compliance.
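The underlying pattern is simple enough to sketch: map failure events to collection actions so diagnostic state is captured the moment something breaks. The event names and collector functions below are hypothetical stand-ins for illustration, not Unity's actual interfaces.

```python
from datetime import datetime


def pull_config(device):
    # Stand-in: a real collector would log in to the device and
    # capture its running configuration.
    return f"[{datetime.now():%H:%M:%S}] running-config of {device}"


def pull_interface_status(device):
    # Stand-in for an interface status poll (e.g. walking the ifTable).
    return f"[{datetime.now():%H:%M:%S}] interface status of {device}"


# Map each event type to the diagnostics worth grabbing immediately.
RUNBOOK = {
    "downstream_device_failure": [pull_config, pull_interface_status],
}


def handle_event(event_type, device):
    """Run every collection action registered for this event type."""
    return [action(device) for action in RUNBOOK.get(event_type, [])]


for line in handle_event("downstream_device_failure", "edge-router-7"):
    print(line)
```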
It may seem that big-vendor utility computing is a beacon of automated control for IT departments in search of network and systems management simplification. But beware of below-the-surface complexity--think Titanic versus iceberg.

Here are some areas where utility computing will help in 2004, according to technology researcher Summit Strategies:
• Business and business process guidance, to help companies define their business objectives and the processes best suited to achieving them;
• IT architectural, integration and process services, to help companies adapt their IT environments to be more responsive to business needs and accommodate dynamic computing principles;
• Virtualized, pooled IT infrastructures that improve the cost-effectiveness, availability and flexibility of IT environments;
• Automated, policy-based management tools and standardized, repeatable, best-practices management processes that provide comprehensive management from individual IT elements up through the business-process level;

• Flexible multisourced deployment options, ranging from internal deployment and management to complete outsourcing and all forms of hybrid arrangements; and
• A range of flexible financing options, letting customers do everything from paying for all IT resources and management up front to paying by the individual user or transaction.

Eighty-three percent of MSPs (managed-service providers) said their revenue grew by more than 54 percent in 2003, with the number of customers served jumping, on average, from 500 to 700, according to an October joint MSPAlliance and ThinkStrategies study that focused on the managed-services and IT-outsourcing industries. In addition, these providers said contract renewal rates are above 90 percent, with nearly 65 percent of customers buying additional services.
All this good news for the MSPs makes it look like many of us are turning to outsourcing to get network and systems management done ... and we are. But remember, the survey is done by and for the select group of MSPs in the MSPAlliance, and it does butter their bread to show things in a good light.
Still, for most companies, network management is only a cost, so moving management to outsourcers makes sense. In addition, contract lengths and up-front costs have been dropping, making it easier to move to an MSP model.
It may seem like utility computing will undercut this MSP advance by making it possible for enterprises to simplify network management, but that's not likely inasmuch as one important goal of utility computing is creation of procedures and policy. And of course, for an MSP, this isn't anything new, as most MSPs attempt to define service deliverables for clients to help manage the relationship. As enterprises adopt utility computing, their internal service definitions and policies will help the MSPs, lowering their cost-to-manage by removing the need to create a definition to manage to. So though utility computing may increase competitive pressures on MSPs, it's not likely to change their role, simply because they are a management utility now.

Bruce Boardman, executive editor of Network Computing, tests and writes about network and systems management. He has 12 years' experience managing networks and distributed computing for a financial service provider. Write to him at [email protected].
Aprisma: Aprisma emerged from its Cabletron legacy with improvements, and its Spectrum line continues to innovate.
Hewlett-Packard: Version 7.0 of Network Node Manager, the stalwart NSM platform, adds more base root-cause functionality and continues to advance HP's management strategy.
IBM: Utility computing is on everyone's lips thanks to IBM. Big Blue also has some of the most interesting desktop-management software going.
Opnet Technologies: Opnet is making network simulation possible for more than just the government and military by making it easier to do.

ProactiveNet: Brings together more data sources and improvements in data mining.
Panacya: Panacya's work on smart thresholds that self-regulate is bringing new sensitivity to alerts.
Packeteer: Packeteer continues to position itself for new management strategies through deep application awareness.
SeaNet Technologies: SeaNet moves the understanding of network usage and experience forward through transaction tracking.
Singlestep: Singlestep takes a practical step toward utility computing.
• "Route Optimization: Route Optimizers Put You in the Driver's Seat"
• "Layer 2 Layout: Layer 2 Discovery Digs Deep"
• "Playbook: Staying One Step Ahead of Performance"
• "CA Puts Muscle Behind On-Demand Computing Plans With Four New Products"