Tactical Security 101

Taking a scattershot approach to information security can be worse than doing nothing. Address the critical areas we detail here to ensure the safety of your assets.

January 20, 2003


Other, more mature practice areas have adopted measurement standards that information security still lacks. For example, corporate security/financial fraud units frequently measure their effectiveness by comparing audited loss statistics to industry baselines: If their losses exceed the baseline, they are doing poorly; if losses come in lower, they are performing above average. Although the infosec industry lacks such data, history and methodology, it's clear that smart spending can reduce losses--and, conversely, that negligence can cost you big.

Getting a Game Plan

You have to create a security road map centered on policy definition and asset identification before making any major technology investments. Those lacking strong policies should consider hiring a consultant or jump-starting the effort with security-template tools like NetIQ's VigilEnt Policy Center (see "Policy Management Hits the Web").

Once you've laid out the basics, determine how far you are from policy compliance and baselines, and where you come up short in terms of access control. Tactical technology solutions can help here, but only if applied in the right order, for the right reasons. For example, host-based intrusion-detection systems do little good if the hosts on which the HIDS agents reside are unpatched and open to compromise: Alarms will fire constantly while the hosts remain vulnerable, effectively rendering the HIDS worthless. In this scenario, money and time would be better spent solidifying patch management.

You probably face political and organizational challenges as well. For example, many organizations have learned that without antivirus systems, they'll chase faceless demons indefinitely. Antivirus becomes a "must have"--who operates it is clear, and the decision on the technology is simple. When considering firewalls and inline NIPS (network-intrusion-prevention system) products, however, roles and responsibilities come into play. An organization with a centralized operational security unit, for example, will probably have the IDS (which normally sits out of band) and firewall administrators on the same team. In that case, the decision to implement an inline NIPS is a no-brainer.

However, if the NIPS administrators are part of an infosec unit outside IT, putting what would normally be a passive device (an IDS) into a production role (inline with the firewalls) may blur responsibilities. Who operates the NIPS? Who troubleshoots network outages? Do the security staffers lose control of the NIPS or gain control of the firewalls? Roles and responsibilities can become bigger factors than the technology.

Thus, before embarking on any major security technology purchase, organizations must ask a few basic questions:

• What asset does this technology protect?

• How effective is it?

• What's its operational impact?

• Do we have the resources to manage it?

• Will it work with, or against, other security controls?

Once assets are identified and these questions are answered, you can start to prioritize. Without a tiered defense strategy, organizations have few controls standing between critical digital assets and threats. Various security technologies are a must; the challenge becomes choosing and implementing them.

One critical but often overlooked area is the management of known application and operating-system vulnerabilities--vulnerabilities that have been publicized and, frequently, have fixes available yet remain unmitigated. Stay on top of patch management and vulnerability-assessment scanning. Vulnerability-assessment tools help protect system and infrastructure assets and complement just about every other security technology. An effective vulnerability-assessment/patch-management effort will reduce operational risks for everyone.
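To make the scanning side of that effort concrete, here's a minimal sketch of the kind of sweep a vulnerability-assessment tool automates. The host list, ports and thresholds are hypothetical; commercial scanners layer thousands of signature checks, credentialed audits and reporting on top of this basic idea.

```python
# Minimal sketch of a vulnerability-assessment sweep: TCP-connect to a short
# list of service ports on each host and grab whatever banner the service
# offers, so stale versions can be flagged for patch review. Hosts and ports
# below are placeholders, not recommendations.
import socket

HOSTS = ["10.1.1.10", "10.1.1.11"]          # assumed asset inventory
PORTS = [21, 25, 80, 443]                    # common services worth auditing

def grab_banner(host, port, timeout=3.0):
    """Return the first line a service sends, '' if it stays quiet, None if closed."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            try:
                return sock.recv(256).decode(errors="replace").strip()
            except socket.timeout:
                return ""                    # port open, but no banner offered
    except OSError:
        return None                          # closed, filtered or unreachable

if __name__ == "__main__":
    for host in HOSTS:
        for port in PORTS:
            banner = grab_banner(host, port)
            if banner is not None:
                print(f"{host}:{port} open  banner={banner!r}")
```

In practice, the banners and findings would then be cross-referenced against a patch or vulnerability database, which is exactly where tighter integration with patch-management tools pays off.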

Many vulnerabilities have bitten organizations multiple times, through both manual and automated attacks. For example, two of 2001's most vicious worms, Code Red and Nimda, leveraged known vulnerabilities that had patches available. If more organizations had fixed those vulnerabilities in a timely manner, neither outbreak would have been as severe.

Known holes are also one of the top attack routes. According to the PricewaterhouseCoopers/InformationWeek 2002 Global Information Security Survey, "Exploited Known OS Vulnerability" tops the charts as a method of attack at 41 percent (up 10 percentage points from 2001; see chart below). And in cases where the cause of a compromise can be determined, more than 99 percent stem from known attack types for which countermeasures are available, according to Jeff Carpenter, manager of the CERT Coordination Center, part of the Software Engineering Institute.

Shrinking Toolbox

Much to our surprise, given this reality, many popular scanning tools have been neglected or discontinued. In 2002 both Cisco Systems and Network Associates dropped their vulnerability-assessment solutions (NetSonar and CyberCop Scanner, respectively), and Internet Security Systems gave Internet Scanner its first major revision in years. Considering that Internet Security Systems and Network Associates were market-share leaders, and that the vulnerability-assessment market is growing, it should come as no surprise that newcomers have stepped into the breach. Next-generation scanning solutions like Qualys' QualysGuard, Foundstone's FoundScan, nCircle Network Security's IP360 and Tenable Network Security's Lightning are looking to fill the void.

Some of these vendors started off with a perimeter-centric view of the world, but they are now shipping products aimed at enterprise-level internal vulnerability assessments. Consider speed, depth of coverage and signature checks, reporting, trending and scalability when choosing your vulnerability-assessment products.

Once vulnerabilities are identified, administrators need to snap into action with patches and hot fixes. Patch-management solutions from PatchLink Corp., BigFix, St. Bernard Software and others offer an efficient way to address mitigation tasks (see "PatchLink Helps Keep Windows Closed"). While large organizations will always need plain-vanilla vulnerability-assessment scanners for their audit teams, we hope to see tighter integration between vulnerability-assessment and patch-management solutions in 2003.

One caveat: Neither of these product types helps with security flaws in custom applications. Some tools, such as Cenzic's HailStorm and @Stake's WebProxy, can help highly technical auditors look for programming and design mistakes, but there's no simple solution for vulnerabilities in custom applications. Think about it--over the years Microsoft has come under siege for code flaws that created nightmarish security problems. If there were an automated way to detect these bugs, wouldn't the world's largest software company use it? Until application-development practices evolve and security enters the design life cycle, we'll have problems in this area.

Gone are the days when network administrators had to beg for firewalls. The firewall market is the most mature in the security industry, dating back to the mid-1990s. Firewall technology basics are well understood--even by upper management.

However, as mature as the products may be, a number of dynamics bear watching in 2003. First, our recent poll of 90 readers on security suggests that organizations are still making firewall changes. Firewall deployments/replacements ranked second only to NIDS (network-based IDS) and spam-filter deployments (see chart at right).

Second, it will be interesting to see if vendors can meet gigabit and multigigabit requirements--particularly at the core. Many of the industry's leading firewalls rely on mainstream hardware (SPARC and Intel), and we're not sure whether those architectures can provide enough power to push firewalls to, and beyond, the gigabit barrier.

Third, integration between technology types (anti-DoS, IDS and traditional infrastructure, for example) will bring new options to the enterprise. We expect firewalls to gain features, and market consolidation to continue.

What does this all mean for the enterprise? For starters, smart organizations will start looking to manage their firewall deployments more effectively. Firewalls typically serve as good network access-control devices and can help protect host and infrastructure assets. However, they are often ineffective when it comes to host protection--too many operating-system and application vulnerabilities sail right past them. Moving forward, you'll need to ensure that your organization's critical assets are protected by firewalls and, where appropriate, by more asset-centric controls, such as HIPS (host-based IPS) and encryption suites.

While some organizations have critical assets in DMZs (demilitarized zones) and other perimeter points, many have critical systems at the core of their enterprise. Smart firewall placement can help both internal- and external-facing assets, and "endpoint" protection solutions from companies such as Sygate Technologies, Zone Labs, Secure Computing and 3Com are worth investigating.

As the number of deployed firewalls increases, so do operational headaches and administrative overhead. Check Point Software Technologies has dominated on this front because its platform offers one of the few truly scalable management frameworks. However, as NetScreen Technologies and other competitors get their management acts together, Check Point will have to fight for continued dominance. Regardless, organizations considering enterprise-level firewall implementations should scrutinize the management framework. Pilot error is still a big problem in firewall administration, and a clear interface can help reduce mistakes.

Rapid quarantining and containment are crucial for large-scale, multinational corporations that need to combat worm outbreaks. For example, in 2001 a number of the Fortune 500 took serious hits when Nimda rapidly infected key Web and mail servers and wreaked havoc within organizations that had "mushy" centers.

In fact, in 2002 we even saw worms take down PBXs, evidence that some of the industry's leading voicemail systems have vulnerable Sun Solaris and Microsoft Windows operating systems running under the hood. Smart organizations were able to contain outbreaks, however, with detection and quarantine processes. Although many of these manual quarantine efforts used simple router ACLs (access-control lists), strategically placed firewalls with unified management frameworks could have made the process even more efficient. And for those that didn't have choke points in place, these deployments could have made the difference between safety and six- to seven-figure losses.
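As a rough illustration of that quarantine step, the sketch below generates IOS-style ACL entries for subnets flagged as infected. The subnets, ports and ACL number are invented for illustration only; any real containment effort needs choke-point placement, change control and a rollback plan.

```python
# Hypothetical sketch of worm containment via router ACLs: given subnets
# flagged as infected, emit IOS-style extended ACL lines a network team could
# push to choke-point routers. All values are placeholders.
QUARANTINED = ["10.20.30.0 0.0.0.255", "10.20.40.0 0.0.0.255"]  # assumed infected nets
WORM_PORTS = [80, 137, 139, 445]                                 # ports the worm abuses (assumed)

def build_acl(acl_id=150):
    lines = []
    for net in QUARANTINED:
        for port in WORM_PORTS:
            lines.append(f"access-list {acl_id} deny tcp {net} any eq {port}")
    lines.append(f"access-list {acl_id} permit ip any any")       # leave other traffic alone
    return lines

if __name__ == "__main__":
    print("\n".join(build_acl()))
```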

Rapid-response capabilities are a combination of technology and process: Organizations with response procedures, timely access to router and firewall reprogramming capabilities, and the ability to tune their Web caching engines saved hundreds of thousands of dollars in downtime and repair costs.

The enterprise firewall market is dominated by Cisco and Check Point, according to Gartner, with NetScreen slowly gaining ground. Check Point packs in more options than a Japanese cell phone, but it will be interesting to see if Cisco and NetScreen start leveraging their integration plans to gain ground. NetScreen is looking to integrate its recently acquired OneSecure inline NIPS and normalization technology into its firewall line, and Cisco has begun putting firewall, VPN (virtual private networking) and IDS functionality into its core switching platforms.

Finally, organizations that are looking for more than just a strong front door may want to keep an eye on IntruVert Networks, TippingPoint Technologies, NetScreen and others that offer Layer 7 inspection and scrubbing features. While few will dispute the security benefits of traditional Layer 7 application-proxy-based firewalls, the lack of clear development progress on many traditional proxy-based solutions, such as Secure Computing's SideWinder and Gauntlet (recently acquired from Network Associates) and Symantec's Raptor, has left many practitioners scratching their heads. Some of the "normalization" features found in OpenBSD have sparked interest, and products such as OneSecure's (now NetScreen's) IDP offer a curious blend of intrusion-prevention and normalization features. We may see such "proxy killers" gain momentum in the coming months (for more on normalization, see www.aciri.org/vern/papers/norm-usenix-sec-01.pdf).

Intrusion detection remains one of today's hottest areas, but while IDS technology is sexy and evolving rapidly, we believe it offers only a limited ROI, and only if it's deployed in a sane manner. Many IDS efforts fail because the overhead required to operate and monitor large-scale deployments is underestimated. Compounding the problem, many organizations go straight from testing to deployment, bypassing the pilot phase. The result is incomplete deployments and unmonitored event logs, or sensors that fall horribly behind on signature updates. In addition, most NIDS products are reactive, making them less effective as protection mechanisms.

While technology designed to complement IDS, such as Lancope's StealthWatch, is taking steps away from traditional signature-based solutions, most NIDS products are still plagued by false alerts and can overwhelm administrators. Put simply, large IDS deployments can present a significant and costly burden to their operators, serving as glorified burglar alarms. Unless the NIDS industry takes some gigantic steps forward in the near term, we caution against embarking on a large-scale NIDS deployment without a strategy for handling the associated overhead.

Many organizations are turning to event correlation to lessen some of the analysis load. We think this is a smart move: Not only do aggregation and correlation solutions solve the "Where should I send and store my logs?" problem, they can reduce the time it takes to analyze and act on security events. For example, if you were a security analyst, would you rather be presented with thousands of IDS alerts from an array of sensors, or a limited number of events based on IDS alerts cross-referenced with firewall entries, referenced against host/asset databases, followed by the confirmation that the attack types match existing vulnerabilities?
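Here's a minimal sketch of that kind of cross-referencing, assuming simplified alert, firewall-log and asset records (all hypothetical). The point is the winnowing logic, not the data model: an alert is escalated only if the traffic actually got through and the target is known to carry a matching vulnerability.

```python
# Toy event-correlation pass: keep only IDS alerts whose traffic the firewall
# passed AND whose target host is vulnerable to that signature. Field names
# and sample data are invented for illustration.
ids_alerts = [
    {"src": "192.0.2.9", "dst": "10.1.1.10", "sig": "IIS_UNICODE_TRAVERSAL"},
    {"src": "192.0.2.9", "dst": "10.1.1.20", "sig": "IIS_UNICODE_TRAVERSAL"},
]
firewall_passed = {("192.0.2.9", "10.1.1.10"), ("192.0.2.9", "10.1.1.20")}
asset_vulns = {
    "10.1.1.10": {"IIS_UNICODE_TRAVERSAL"},   # unpatched Web server
    "10.1.1.20": set(),                        # host already patched
}

def correlate(alerts, passed, vulns):
    """Return only the alerts that reached a host vulnerable to that attack."""
    actionable = []
    for alert in alerts:
        reached = (alert["src"], alert["dst"]) in passed
        exposed = alert["sig"] in vulns.get(alert["dst"], set())
        if reached and exposed:
            actionable.append(alert)
    return actionable

if __name__ == "__main__":
    for alert in correlate(ids_alerts, firewall_passed, asset_vulns):
        print("ESCALATE:", alert)
```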

If you're like most, you'd rather be told about the 20 items that you should pay attention to, not the 10,000 items that may or may not be of concern. While the correlation market is even younger than the IDS one, it promises to bring relevancy to a sea of otherwise misleading data points. Unfortunately, these correlation solutions are hefty investments and often require resource-intensive deployment efforts.

While the promise of HIDS has been based on an asset-centric approach, most HIDS offerings have not moved beyond log auditing and binary integrity checking. They are asset-based only in that they are located closer to most assets--on the host, as opposed to on the network. Fortunately, the HIPS (host-based IPS) market is picking up steam. Entercept Security Technologies, Okena and other companies have created solutions that can be configured to stop known and unknown attacks based on application behavioral profiles (see "HIP Check"). This approach is complicated but comes with a much higher payoff: Attackers aren't just identified by the technology; they can be stopped by it. If your organization is going to go through the hassle of deploying host-based intrusion-detection/prevention agents on production systems, consider a proactive HIPS solution.

So where does intrusion detection fit in your security strategy? Key to any effective control is the monitoring of that control. Traditional NIDS solutions can serve as watchdogs to help verify that network-based controls, such as firewalls, are effective, and they can serve as alert mechanisms for abnormal network activity. However, monitoring the effectiveness of controls is useful only if the controls have been deployed properly. Organizations that have not taken steps to identify assets, set policies and protection profiles, and execute on those policies should not be pursuing large IDS deployments. They should be taking care of the basics first. IDS efforts should never trump firewall, host lockdown, and vulnerability assessment and patching.

Looking Ahead

Organizations must first cover the nuts and bolts of security: defining policies, identifying critical assets, assigning roles and responsibilities, deploying network and host access-control mechanisms, implementing database controls, monitoring, deploying antivirus and hostile code protection mechanisms, and implementing selective use of encryption, training, patching and auditing.

However, a few new technology areas have caught our attention.

Network forensic products (not to be confused with netForensics, the security information management provider), such as Sandstorm Enterprises' NetIntercept, help answer the question, "What happened?" after a network-based attack. These tools capture network traffic in its entirety and let administrators replay attacks, analyze transferred files and data, and put the pieces back together after a security event. While reactive, these solutions can shed light on what data is moving around on the network and what is leaving it.
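Here's a rough sketch of the capture-everything idea, using the open-source Scapy library rather than any of the products above (that substitution is ours): record traffic on a monitored segment to a pcap file that analysts can replay and dissect later. The interface name and filter are placeholders.

```python
# Minimal forensic-capture sketch with Scapy: sniff traffic and write it to a
# pcap for later replay and analysis. Requires root/administrator privileges
# and a capture-capable interface; values below are hypothetical.
from scapy.all import sniff, wrpcap

def capture(interface="eth0", count=1000, outfile="incident.pcap"):
    packets = sniff(iface=interface, filter="tcp", count=count)  # capture a bounded sample
    wrpcap(outfile, packets)                                     # store for later replay
    print(f"wrote {len(packets)} packets to {outfile}")

if __name__ == "__main__":
    capture()
```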

Another intriguing nontraditional security product is SecureLogix's Enterprise Telephony Management, a firewall-like system for your telecommunications infrastructure. It gives telco administrators many of the features found in traditional network firewalls, such as the ability to block calls by inbound or outbound number, detect call types, alert in real time, and report on usage and frequency. The product also helps address one problem that often flies under the infosec radar: war-dialing. ETM can detect attackers looking for open modem banks, making the product a multipurpose tool (see "Dial 1-800 Plug Holes").
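To illustrate the war-dialing pattern such tools hunt for, here's a hypothetical sketch that flags callers sweeping many extensions with short probe calls. The call-record format and thresholds are invented for illustration and aren't drawn from ETM or any other product.

```python
# Hypothetical war-dialing detector: one outside number hitting many
# extensions with very short calls looks like a modem sweep, not a
# conversation. Records and thresholds are made up for this example.
from collections import defaultdict

calls = [  # (caller, dialed extension, duration in seconds)
    ("+1-555-0100", "x4001", 4), ("+1-555-0100", "x4002", 3),
    ("+1-555-0100", "x4003", 5), ("+1-555-0199", "x4100", 180),
]

def flag_war_dialers(records, min_targets=3, max_duration=10):
    sweeps = defaultdict(set)
    for caller, ext, duration in records:
        if duration <= max_duration:          # short probes, not real conversations
            sweeps[caller].add(ext)
    return [caller for caller, exts in sweeps.items() if len(exts) >= min_targets]

if __name__ == "__main__":
    print("possible war-dialers:", flag_war_dialers(calls))
```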

Finally, spam has hit crisis proportions, so much so that it's become a security concern. Companies like Big Fish Communications are combating spam by taking a page out of the service-provider and antivirus playbooks. Serving as the primary entry point for corporate mail, Big Fish's distributed network of mail systems uses a combination of heuristic, blacklisting and pattern-matching technologies to create a robust filtering service. Roll in virus protection and redundancy, and Big Fish offers an attractive service.
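As a toy illustration of the blacklisting and pattern-matching layers, the sketch below classifies a message against an assumed block list and a handful of content patterns. A service like Big Fish's adds heuristics, reputation data, virus scanning and redundancy on top.

```python
# Toy spam filter combining a domain block list with simple content patterns.
# The blocked domains and regular expressions are placeholders, not a
# recommended rule set.
import re

BLACKLISTED_DOMAINS = {"bulkmail.example", "spamcannon.example"}   # assumed block list
SPAM_PATTERNS = [re.compile(p, re.I) for p in (r"act now", r"free\s+money")]

def classify(sender, subject, body):
    """Return 'spam' or 'clean' for a single message."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in BLACKLISTED_DOMAINS:
        return "spam"
    text = f"{subject}\n{body}"
    if any(p.search(text) for p in SPAM_PATTERNS):
        return "spam"
    return "clean"

if __name__ == "__main__":
    print(classify("deals@bulkmail.example", "Act now!", "Free money inside"))
```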

Bottom line, there is no one-size-fits-all plan for prioritizing your security technology spending. However, understanding where your assets lie, what your weaknesses are and what various products can do for you will put you on the road to effectively deploying the right technology.

Greg Shipley is the CTO for Chicago-based security consultancy Neohapsis. Write to him at [email protected].

Technology Area: Vulnerability-Assessment Scanners

Market Size: The intrusion-detection and vulnerability-assessment software market will reach $1.45 billion in 2006, according to IDC.

What's good: Products identify and report OS vulnerabilities

What's bad: Scalability issues on very large networks; custom application problems not addressed

Challenges: Scalability, timely updates, strong reporting, working with IDS and patch management solutions

Our call: Incredibly important technology space that has been ignored by most vendors

Related Stories:

"Tipping the Scales" (Network Computing, Sept. 30, 2002)

"Control the Keys to the Kingdom" (Network Computing, Sept. 2, 2002)

"It's the Authentication, Stupid" (TechWeb, Sept. 4, 2002)

Technology Area: Firewalls

Market Size: Worldwide VPN and firewall hardware and software revenue hit $668 million in 3Q02 and is forecast to reach $874 million by 3Q03, according to Infonetics Research.

What's good: Mature market, known technology, good as a network ACL device

What's bad: Often incapable of blocking application-level attacks, provides false sense of security

Challenges: Large-scale management, moving up to Layer 7 at higher speeds, moving past the gigabit barrier while maintaining rich feature sets

Our call: While stable, the market will continue to evolve in 2003

Related Stories:

"Hackers' Preferred Entry Point Is Tough To Close" (InformationWeek, July 8, 2002)

"New Security Threats--Stronger Defenses" (Network Computing, May 13, 2002)

"Defense Mechanisms" (Network Computing, Nov. 12, 2001)

"Building an In-Depth Defense" (Network Computing, July 9, 2001)Technology Area: Network Intrusion Detection

Market Size: Worldwide IDS product revenue hit $94 million in 2Q02 and is expected to grow 42 percent, to $135 million, in 2Q03, according to Infonetics Research. Gartner, however, says that by year-end 2003, 90 percent of IDS deployments will fail if false positives are not reduced by 90 percent.

What's good: Identifying attack patterns and providing administrators with further visibility into their networks

What's bad: High administrative overhead for a primarily reactive product space

Challenges: Providing relevant data, reducing administrative overhead, bypassing the gigabit barrier, reducing false alerts

Our call: Second-generation products could turn the tables in 2003

Related Stories:

"Does Your Intrusion-Detection System Really Work?" (InternetWeek, Nov. 14, 2002)

"HIP Check" (Network Computing, Oct. 21, 2002)

"Connect the Dots" (Network Computing, April 1, 2002)Technology Area: Desktop and Server Antivirus

Market Size: Corporate demand for antivirus will account for 78 percent of the antivirus market by 2006 and the server-based antivirus market will grow from $508 million in 2001 to $1.5 billion in 2006, according to IDC.

What's good: Provides some protection against known hostile code

What's bad: Typically worthless against new attack methods

Challenge: Becoming less signature-centric and more proactive

Our call: The AV market should take notes from the Okenas and Entercepts of the world

Related Stories:

"Symantec Introduces Virus Protection For File, Cache Servers" (InformationWeek, Dec. 9, 2002) A few decades ago, in a mainframe world where big iron was king, AAA, strong passwords and a firm grasp of the access controls surrounding jobs and data sets were enough to survive. Unix was for scientists and academia, Linux wasn't even a pipe dream, and PCs were far from prime time. Centralized computing was the norm, and security techniques followed suit. For example, end-user protection strategies typically revolved around training employees to use strong passwords and then convincing them not to write said passwords on notes left next to their terminals. This model made sense: The mainframe stored critical data and applications; it was centralized and thus easily defensible.

Fast forward a few years. IP and IPX began to take hold, and LAN and WAN technologies started to converge. Distributed computing models gained ground, and many of the techniques pioneered in the mainframe world were applied to new operating systems such as Novell NetWare, Microsoft Windows NT and a smorgasbord of Unix derivatives. Top threats included password guessing, leveraging file-system deficiencies and exploiting system-trust relationships. The ease with which systems and networks could be built brought new challenges, but protection techniques still centered on file-access control, authentication and the occasional network restriction. Network-access controls in the form of router access-control lists and early firewalls added a few new tools into the mix.

Today, firewalls protect our perimeters, and intrusion-detection systems look for attack patterns. Our users face threats that include Trojan horses in the form of e-mail attachments, spyware, remote-control software, cross-site scripting traps, hostile Web sites leveraging browser flaws, worms, viruses, VPN hijacking techniques ... the list goes on. Attackers can come from anywhere on the planet, using dozens of technology types. E-mail messages are filtered and scrubbed, Web pages are pumped through proxies, and it's not uncommon for a laptop to have three or more security-related programs running at any given time. But we're still having huge problems.

A single piece of data may reside on a desktop, in a tape library, on a file server or in a database. It may be accessible only through a single Windows application using a single file, or it might be viewable from across the globe using a Web-enabled TN3270 emulation package traversing dozens of networks. It may require strong authentication if you're using normal channels, but leveraging the latest IIS problem or a recent Oracle vulnerability may grant carte blanche to the data underneath.

Many organizations have struggled to refocus their efforts; identify critical assets and potential targets; apply relevant technology to the right protection effort; and keep policies, process and technology efforts in line. But make no mistake: Many of our efforts are based on the world we lived in 10 years ago, a world that no longer exists. Today, nothing is unbreakable.
