Painless (Well, Almost) Patch Management Procedures

Are you proficient in patch management? With malicious threats at an all-time high, you'd be crazy not to have patching procedures in place. Here's how to implement them.

March 26, 2004


Perils of Patching

The lead bogeymen in the patch-management nightmare are patch volumes and frequency, resource availability and operational impact.

On Microsoft platforms alone, there were 51 advisories released in 2003, and the frequency got as high as several critical flaws per week. On a more macro level, our Security Threat Watch newsletter reported more than 1,040 vulnerabilities in 2003.

In a bid to curb growing enterprise discontent, Microsoft moved this year to a seemingly more consistent patch-release cycle--its "second Tuesday of the month" strategy (see "The Microsoft Patch Trick" for our take). However, many organizations are less concerned about the timing than they are about the numbers and impact.

"Because of the volume and impact, the release cycle is becoming less relevant," says George Collins, head of security for a large Midwest manufacturing and automation company. "There are more applications that become finicky and fickle around the patches, and while we still need the patches, patching is a nightmare no matter how you deal with it."To make matters worse, the interval between a vulnerability's discovery and the creation of exploit code is shrinking (see "Shrinking Time Lines or Increasing Urgency?," page 42) narrowing the wiggle room organizations have to get their patching done. "I wish companies like Microsoft would put some more time into looking at their code before release and stop assuming that the businesses are going to debug it for them," Collins says.

Microsoft isn't the sole offender, of course--members of the vulnerability research community continue to discover devastating software flaws in Cisco Systems, Sun Microsystems, Linux and other mainstream platforms. The entire software industry faces some serious challenges.

Beyond frequency, companies wrestle with operational difficulties. For starters, system administrators struggle with the testing requirements of new patches. Will the patches trample custom applications and turnkey solutions? Will they break dependencies the vendor didn't foresee? In our poll of more than 600 readers, nearly half identified having the time and resources to test patches as a big obstacle to timely deployment.

Operational teams must patch a growing number of systems and applications more frequently and in less time--with the same resources, if not fewer.

IT Minute: Patch Management

Tune in for a behind-the-scenes discussion with Tony Arendt of our Neohapsis partner labs about his patch-management review. Get the intimate details behind his testing and product comparisons.

Thickening the plot is the fact that patches and hot fixes commonly have unforeseen side effects. We've witnessed firsthand some of our own file servers become unusable after applying specific patches, and our readers have even more horror stories. For example, Collins points out some of the challenges manufacturing companies face: "The underlying OS on many of these manufacturing systems is often Windows NT or Windows 2000. Patching becomes a problem, as the companies are typically running very custom and critical applications, and the need to keep those systems running often outweighs the need to patch them--if you break them, you break the ability to make money. The result is often three to four months of finger-pointing between vendors to verify things, and by the time we're ready to patch without breaking anything, there are more patches!"

To be fair, vendors are also in a tough position: On the one hand, they're pressed to release patches before vulnerabilities are leaked to would-be attackers. On the other hand, they're expected to perform heavy regression testing to ensure patches won't cause outages. Clearly, timeliness and thoroughness are conflicting goals, putting even the most mature software companies in a bind.

Another operational challenge is prioritization. Many organizations lack the resources to identify and classify the critical systems that need timely patching. While the idea of not knowing what's on your network may be amusing to IT personnel in small shops, flying blind is a reality for many enterprises. One piece of this "networks of the unknown" puzzle is technology-based--many organizations have incomplete, underutilized or functionally inadequate asset-management systems, so IT teams have no hope of identifying what's placed in and removed from their networks.

Another component is leadership: If CXO-level management doesn't have a clear policy on the need to identify network-attached assets, plus the willingness and authority to enforce such a policy, no amount of technology will save the network. Without the marriage of technology and policy, large organizations have little hope of knowing what to patch, much less doing so in a timely manner.

With shrinking timelines, growing threats, static or decreasing resources, and increasingly complex technologies, a career in rocket science might seem a simpler alternative to IT. However, consider these tips before brushing up on your calculus:

  • Agree on a patching policy and timeline. Many organizations have truckloads of policy documentation, but they fail to make one crucial point up front: We will patch our systems in a timely manner.

    So why isn't patching high on all organizations' priority lists? Because IT is in the business of availability, and taking servers down for patching is perceived as conflicting with that mission. But security is clearly a subset of reliability--a point that's rapidly being driven home.

    More important than formulating a patching policy is having it approved and ratified across the entire organization, not just by ivory-tower infosec folks. While security and operational teams must agree on realistic patching timelines, their window of opportunity is now less than 30 days. Lest there be any doubt, consider the appearance of Blaster and Welchia in the second week of August 2003. The worms exploited the MS03-026 vulnerability, first announced in the third week of July 2003. We expect the time horizon to narrow even further this year.
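    To see how tight that window already is, here's a quick back-of-the-envelope calculation using the Blaster example above. The exact dates are approximations drawn from the timeline cited in this article (advisory in mid-July 2003, Blaster in mid-August), so treat the result as an illustration rather than gospel.

```python
from datetime import date

# Approximate dates for the MS03-026/Blaster example cited above.
advisory_released = date(2003, 7, 16)   # MS03-026 advisory (mid-July 2003)
exploit_in_wild = date(2003, 8, 11)     # Blaster outbreak (mid-August 2003)

patch_window = (exploit_in_wild - advisory_released).days
print(f"Days available to patch before Blaster: {patch_window}")  # roughly 26 days
```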

  • Employ automated technology. Patching sneakernet-style may be viable for small shops, but try that on thousands of hosts and you'll soon have holes in your Chuck Taylors. Fortunately, there's a growing number of automated options: desktop-management suites, update-aware operating systems and specialized patch-management solutions all promise to help with this problem. We review five of the latter in "Patchwork Protection" (page 45), but regardless of which technology works best for your environment, take the time to pick one.

    In addition to enlisting technology, some small organizations turn to their end users for help. Dale Singleton, director of systems administration at the not-for-profit Anixter Center in Chicago, put his users to work by configuring Microsoft's Windows Update service to automatically download patches to workstations and prompt users to install those patches.

    Chart: Shrinking Time Lines or Increasing Urgency?

    "We have more than 400 users of varying skill levels and only three people on the entire IT team; we have little choice," Anixter's Singleton says. "We've tried to educate the users that it's important that they update. There's simply no way we can police all these people, so we need their help in keeping our environment secure."Although end-user participation can create its own set of challenges, it beats the alternative. "Sure, we're afraid to do it," Singleton says, "but we're more afraid not to."

  • Know what's on your network. You can't patch what you don't know about. Organizations that have a management-backed asset-identification process and an equally strong policy on quarantining/disconnecting unknown assets are more likely to stay safe.

    Organizations must use technology to identify what's actually running on their networks. Conventional asset-management systems can help, but active probes and traffic-analysis engines that work with live network traffic will always stand a better chance of finding rogue systems, networks and devices.

    Vulnerability-assessment scanners (see "VA Scanners Pinpoint Your Weak Spots,"), traffic-anomaly engines (see "Know Thy Enemy,"), second-generation tools such as Sourcefire's RNA and Tenable Network Security's NeVO (Network Vulnerability Observer), and even open-source security tools like Argus, Nmap, Nessus and Snort can help here.
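    Even a simple scripted sweep can reveal hosts your asset database has missed, as a starting point before the heavier-duty tools above. The sketch below assumes Nmap is installed on the scanning host and uses a hypothetical address range; swap in your own subnets and schedule it to run regularly.

```python
import subprocess

TARGET_RANGE = "192.168.0.0/24"  # hypothetical subnet; replace with your own ranges

# Ping sweep only (-sn): find hosts that respond, without port scanning.
result = subprocess.run(
    ["nmap", "-sn", TARGET_RANGE],
    capture_output=True, text=True, check=True,
)

live_hosts = [
    line.split()[-1].strip("()")
    for line in result.stdout.splitlines()
    if line.startswith("Nmap scan report for")
]

print(f"{len(live_hosts)} hosts responded in {TARGET_RANGE}:")
for host in live_hosts:
    print("  ", host)
```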

  • Prioritize deployment. If knowing what's on your network is a critical first step, prioritizing the systems that need to be patched isn't far behind. In an ideal world, we'd be able to patch everything immediately with zero downtime, but we suspect that day will be here about the same time as we get bug-free software. Classifying critical assets and prioritizing the order in which they'll be patched may make the difference between inconvenient downtime for a few workers and an enterprisewide operations failure.
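    One lightweight way to turn that classification into an actual patching order is to score each asset on business criticality and exposure, then sort. The assets and weights below are purely illustrative placeholders; plug in whatever criteria your organization actually uses.

```python
# Toy prioritization pass: rank unpatched systems so the riskiest get patched first.
assets = [
    {"name": "erp-db01",   "criticality": 5, "internet_facing": False, "patch_missing": True},
    {"name": "web-dmz02",  "criticality": 4, "internet_facing": True,  "patch_missing": True},
    {"name": "test-lab07", "criticality": 1, "internet_facing": False, "patch_missing": True},
]

def risk_score(asset):
    score = asset["criticality"]
    if asset["internet_facing"]:
        score += 3  # exposed hosts jump the queue
    return score

patch_order = sorted(
    (a for a in assets if a["patch_missing"]),
    key=risk_score,
    reverse=True,
)

for asset in patch_order:
    print(f"{asset['name']}: risk score {risk_score(asset)}")
```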

  • Buy time and protection with tiered defenses. A lot of lip service has been paid to the "defense in depth" strategy, but few organizations truly have multiple tiers of defenses (see "Secure to the Core,").

    Are your firewalls keeping out all the bad guys? Are your antivirus products stopping all the hostile code before it breaches the perimeter? Are your patching mechanisms bulletproof? If your primary control fails, is there a secondary?

    Fortunately, some evolving security technologies can add depth. For example, host intrusion-prevention systems can reduce the impact of some vulnerabilities (see "HIP Check,"). Endpoint-protection suites from vendors like Sygate and Zone Labs (which Check Point Software is acquiring) can plug some common holes, and network intrusion-prevention systems can block some attack types when carefully implemented.

    Note that we said some. None of these technologies will render you bulletproof, and the frequently misleading marketing messages surrounding many of these products concern us. For example, anything claiming to alleviate the need for patching is dangerous. Still, it's becoming obvious that the smart, tactical use of tools like network-intrusion prevention technology can buy organizations some breathing room when timely patching of critical systems just isn't feasible.

  • Have backup plans. What if all else fails and you're facing critical exposures, potentially crippling data leaks or worm outbreaks? Can your organization afford to sustain a partial system or network outage? Is your team authorized to disconnect sensitive parts of your computing environment? If not, who makes that call? Can your infrastructure and operations personnel effectively quarantine rogue systems and network segments in an acceptable time frame?

    Having a Plan B (and C, D and E) in place can help when things get ugly. Planning and drills will prove their worth during a storm, when your head isn't so clear.

    When Welchia.C broke out in February, it took advantage of the same hole as its predecessors bearing the same name (the MS03-026 DCOM hole), but it added a new twist: three other vulnerabilities, including MS03-007 (WebDAV). Targeting the WebDAV hole turned out to be a good call by the worm's author in that many organizations didn't get around to patching it. The result? Almost eight months after the vulnerability announcement, those who hadn't applied the WebDAV patch got a rude wake-up call from a four-headed beast.

    The worm phenomenon has definitely helped the IT community push the patching agenda, but some fear (and we agree) that a "worm chaser" mentality has taken the emphasis away from the real goal: being less vulnerable regardless of the latest fashionable threat or attack vector.

    Part of the patching game is about eliminating worm outbreaks and the operational chaos that can ensue, but think about it: Your focus should be on eliminating the primary attack vector for bad things, period. Regardless of where your paranoia level registers on the conspiracy-theory meter, one thing is clear: Worm-related traffic provides a great cover for nefarious activity--the cyber equivalent of a blanketing smoke screen.

    For example, can we realistically expect operational-security staff to differentiate between genuine worm IDS alerts and traffic on the one hand, and exploit attempts by skilled attackers crafted to look like worm traffic on the other? If we can't keep relatively brainless automated tools from ripping our networks apart, what kind of damage could a skilled human do? Or, to take a more paranoid stance, what have they been doing?

    Although Welchia.C made relatively little noise compared with its more heavy-hitting counterparts CodeRed, Slammer and Nimda, it showed signs of more to come: the exploitation of vulnerabilities and attack vectors in combination, over multiple protocols like HTTP and NetBIOS. Heck, we already have worms that are leveraging the remnants of other worms and automatically patching our holes for us, while leaving behind nasty payloads. We're also seeing automated tools that create networks of spam relays using wormlike techniques.

    Don't count on the worm threat going away anytime soon. If anything, it will probably get worse this year. In a sense, we've been lucky so far--none of the worms we've faced has been nearly as destructive as it could have been.

Long-Term Action

Organizations must start factoring patching frequencies into their TCO (total cost of ownership) studies. It doesn't take an MBA with a crystal ball to see that products requiring less patching will eventually have a fiscal advantage in the marketplace. Unfortunately, vendors haven't yet gotten that message and won't until enterprise consumers begin to factor patching costs into product-purchasing cycles.

Meanwhile, whether your organization decides to leverage its desktop-management suite, deploy a patch-management system or rely on its user base for help in the battle, you must develop a unified strategy for patching, as well as find, document and understand your remaining vulnerabilities. The alternative is outages, data loss, the unknown and an ever-expanding risk profile.

GREG SHIPLEY is the CTO for Chicago-based security consultancy and testing lab Neohapsis. Write to him at g[email protected].

  • "Deciphering the Business of Security"If you need patching, automation is the way to go. Indeed, the financial case is easy to make.

Consider the basic cost model: cost = (labor hours per system) x (hourly rate) x (number of systems).
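To make the model concrete, here's a quick comparison of a manual patching cycle against an automated one. Every figure below (per-host labor hours, the hourly rate and the host count) is a hypothetical placeholder; substitute the numbers from your own time analysis.

```python
# Hypothetical inputs; replace with figures from your own time analysis.
hours_per_system_manual = 0.5      # hands-on time to patch one host by hand
hours_per_system_automated = 0.05  # residual per-host effort with a patch tool
hourly_rate = 60.0                 # loaded cost per labor hour, in dollars
num_systems = 2000

manual_cost = hours_per_system_manual * hourly_rate * num_systems
automated_cost = hours_per_system_automated * hourly_rate * num_systems

print(f"Manual patching cycle:    ${manual_cost:,.0f}")     # $60,000
print(f"Automated patching cycle: ${automated_cost:,.0f}")  # $6,000
print(f"Savings per cycle:        ${manual_cost - automated_cost:,.0f}")
```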

Organizations that have established internal charge-back systems can readily measure the savings of deploying an automated patching method once they've done some basic time analysis. But even if your organization doesn't have a ratified charge-back system, performing some basic calculations on FTE (full-time equivalent) employees can usually render a ballpark hourly rate.

ROI calculations are cut-and-dried when one talks about contracted labor associated with patching, because the cost savings are direct and tangible. The savings associated with an employee are often less direct, since employees are typically reassigned to other tasks rather than having their positions eliminated. Regardless, it's clear that less time spent doing mundane tasks will save your organization time--and, as the saying goes, time is money.

As the window between vulnerability release and exploits entering the wild slams shut, patching is taking on new importance. In this cover package, we present some items for your consideration. For example:

  • Is your asset-identification system up to par? You can't protect what you can't find.

  • Is management willing to accept some downtime in exchange for safety? If so, your organization should draft a policy defining the appropriate time frame for getting patches installed.

  • Have you separated the wheat from the chaff? It's vital to prioritize where limited patching resources go.

As for automated patch management, the question isn't whether to implement it, but how. Either a desktop-management system or a dedicated patch manager can do the job.

In "Patchwork Protection," we tested five specialized patch managers. Some key considerations were whether the products needed agents (some did, some didn't, but we don't see what all the fuss is about) and rollback capabilities. Here, we do see the merits of making a fuss: If you've ever deployed a patch that blew up your network, we're sure you'll agree. Only two products, Ecora Software's Patch Manager 3.0 and PatchLink's Update 5.0, offered rollback. Of these, Ecora won our Editor's Choice award, thanks to its intuitive interface.


