Desktop Defense

Let's face it: Antivirus software alone isn't enough to protect your network from intruders entering via users' machines. We'll share some low-cost ways to shore up your desktop defenses.

March 10, 2005


In the short term, forward-thinking organizations will look to combine conventional antivirus technology with supplemental, and often inexpensive, approaches: investigating network-layer controls on their hosts, restricting service profiles, proactively patching both operating systems and applications, using Layer 7-capable network-scrubbing devices, deploying more comprehensive host protection suites and using less-vulnerable applications. Some of these tactics will require further investment, but many simply need organized efforts to better control what you've already purchased. Long term, enterprise consumers must demand mandatory access control (see "The Promise of MAC") and better coding standards in mainstream software. Our OSs and applications should protect us from threats, not expose us to them.

Evolving Vectors

Almost everyone is familiar with the phrase low-hanging fruit as it applies to information security; it typically describes the security problems or holes that are the easiest or cheapest to address, yet often yield a relatively high return by lowering an organization's overall risk profile. Another buzzy term making the rounds is blended threat, which describes attacks that arrive over a variety of media (Web, instant messaging, e-mail, file sharing and so on). Marketing aside, this parlance is a direct result of real-world changes; even basic attacks are appearing in new forms, and the weakest points in our defenses are continuing to shift. But that's only part of the story.

In looking at perimeter trends, it's clear that many organizations have realized the importance of patching and service-exposure restrictions. Vulnerability management provider Qualys, for example, says it continues to see shrinking lead times for patching perimeter-facing systems and closing related exposures. Gerhard Eschelbeck, CTO and vice president of engineering at Qualys, shared some of the company's trending data (see "Vulnerability Half-Life," right), which shows that organizations are patching many of their systems in a more organized, timely manner.

[Figure: Vulnerability Half-Life]

Veteran penetration testers have similar observations. Neohapsis' penetration testing teams have found that over the past five years, would-be adversaries were required to change their tactics to successfully circumvent modern-day controls. For instance, consultant and research specialist John McDonald's exploitation methods have evolved from the time-tested approach of enumerating and directly attacking exposed services, to a more systems-oriented approach with an expanded notion of target surface. "I'm finding that many network perimeters are less porous than they were even four years ago. Outside of Port 25 (SMTP), Port 53 (DNS), Port 80 (HTTP), Port 443 (HTTPS) and the occasional FTP and IPsec services, your OS and service-level exploitation avenues are increasingly limited."
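To make McDonald's point concrete, here's a minimal sketch--in Python, purely for illustration--of the kind of connect scan an attacker (or auditor) might use to confirm how few services a modern perimeter exposes. The target is set to 127.0.0.1 so the sketch is harmless; substitute only a host you're authorized to test.

```python
# Minimal TCP connect scan: confirms how few services a hardened
# perimeter typically exposes. 127.0.0.1 is used as a harmless default;
# replace it only with a host you are authorized to test.
import socket

COMMON_PORTS = {21: "FTP", 25: "SMTP", 53: "DNS", 80: "HTTP", 443: "HTTPS"}

def scan(host, ports, timeout=2.0):
    open_ports = []
    for port, name in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append((port, name))
    return open_ports

if __name__ == "__main__":
    for port, name in scan("127.0.0.1", COMMON_PORTS):
        print(f"{port}/tcp open ({name})")
```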

Neohapsis consultant Justin Schuh, who previously served as a lead researcher for the National Security Agency Red Team, also emphasizes the increased importance of relying on Trojans as a successful exploitation avenue. "Given the much more elevated level of awareness at network borders, attackers are finding that client systems are more viable targets. Once inside, the network becomes a much softer target."

It's good to see that organizations are finally making progress with their perimeter-control efforts, but it's likely that the security challenges will start shifting to where the fruit, indeed, hangs the lowest--the desktop.

There's no shortage of technology to throw at the problem, and there's no shortage of vendors eyeing your money. In fact, a staggering number of security product vendors are touting antimalware programs, with more on the way. New companies like Tenebril and Webroot Software provide endpoint protection agents for combating spyware problems. The conventional antivirus vendor list continues to grow, including companies such as Bitdefender, Computer Associates, Eset, F-Secure, Kaspersky Lab, McAfee, Norman, Panda Software, Sophos and Symantec. Network devices from companies like Blue Coat Systems are including Web-scrubbing technology in their caching appliances to add tiers of defense; firewall vendors such as Fortinet are promoting inline antivirus engines; and network intrusion-prevention products from TippingPoint/3Com and McAfee don't want to be left out. Outsourced message-security providers such as FrontBridge Technologies bundle e-mail scrubbing with their antispam and archiving services. Even Microsoft is getting into the game with the recent release of its antispyware beta and the announced acquisition of antivirus provider Sybari Software.

So, do these approaches work, and will they remain universally effective? Should we just throw as much technology at the problem as we can afford? Not exactly. The wise strategist must consider where to invest those limited dollars and, more importantly, how to prioritize his or her efforts. Some of the best strategies have nothing to do with purchasing additional technology and everything to do with leveraging existing investments.

Attacks on the Host

[Figure: Attacks on the Host]

For starters, most experts agree that the entry points for modern-day nasties are primarily through common communication channels: Web, e-mail, IM, file sharing and peer-to-peer traffic (see "Attacks on the Host," right). Smart organizations will look to combat these increasingly advanced threats technically. The first step is to place comprehensive and unified pattern-identification systems on endpoints that can detect known Trojans, viruses and spyware. Most enterprises have some antivirus coverage, but legacy antivirus products have fallen on their face in the spyware category. Fortunately, bundled antispyware functionality from companies like McAfee and Symantec should soon close this coverage gap. Comprehensive protection against this next wave of Web-based baddies, along with coverage for conventional viruses and attack vectors, is a must. Antispyware and antivirus technology should be provided by the same vendor to reduce administrative overhead, but the level of coverage is really the critical point.
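To see why pattern-based coverage is necessary but not sufficient, consider what signature matching reduces to. Here's a toy sketch, assuming nothing about any particular vendor's engine--real products use far richer signatures and real-time hooks, but the limitation is the same: you can only match what's been seen before. The hash below is the published SHA-1 of the harmless EICAR test file.

```python
# Toy signature scanner: flags files whose SHA-1 matches a known-bad
# list. Real engines use much richer patterns, but detection still
# requires a signature for something already seen in the wild.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA1 = {
    # Published SHA-1 of the harmless EICAR antivirus test file.
    "3395856ce81f2b7382dee72602f798b642f14140",
}

def sha1_of(path):
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan_tree(root):
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            if sha1_of(path) in KNOWN_BAD_SHA1:
                print(f"MATCH: {path}")
        except OSError:
            pass  # unreadable file; a real scanner would log this

if __name__ == "__main__":
    scan_tree(".")  # scan the current directory tree
```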

Second, an organization should complement its existing antimalware strategies with technology that isn't solely dependent on pattern recognition. According to Travis Witteveen, vice president of North American operations for Finland-based F-Secure, the recent opening of the company's San Jose, Calif., lab was operationally motivated. "We have consistently delivered timely updates to our product, but it has come at a cost; our team in Finland isn't sleeping. This was one of the primary drivers for opening up an additional U.S.-based lab--now both teams can get some sleep once in a while." Facility and R&D investments clearly demonstrate F-Secure's commitment to timely research (which is critical for antivirus effectiveness), but F-Secure's struggles aren't unique--this is a systemic problem throughout the antivirus industry. Trying to stay in step with malware authors is a time and numbers game, and neither is in the AV vendors' favor.

Fortunately, more progressive vendors are starting to do something about the cat-and-mouse game. For example, McAfee integrated parts of the Entercept HIPS (host intrusion-prevention system) into its 8.0i AV engine. When the Sasser worm hit, organizations that were running this build felt the benefits; 8.0i blocked CAN-2003-0533 (the "LSASS" vulnerability) attacks without needing an updated pattern file. This is a move in the right direction. We must combine signature/pattern recognition with more preventive, nonpattern behavior-blocking mechanisms. Unfortunately, with the exception of Redwood City, Calif., start-up Determina, most HIPS technology is still intrusive because of its need for policy adjustments.
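To illustrate the distinction, behavior blocking can be thought of as rules over what a process does rather than what its bytes look like. The rules below are invented for illustration--they're not Entercept's or any vendor's actual policy:

```python
# Illustrative behavior rules: block what a process DOES, not what it
# contains. A real HIPS hooks the OS kernel; this sketch just evaluates
# event dictionaries to show the decision logic.
SYSTEM_DIRS = ("/windows/system32", "/usr/sbin")  # example protected paths

def evaluate(event):
    """event: {'process': ..., 'action': ..., 'target': ...}"""
    proc, action, target = event["process"], event["action"], event["target"]
    # Rule 1: an e-mail client has no business spawning a command shell.
    if proc == "outlook.exe" and action == "spawn" and "cmd" in target:
        return "BLOCK"
    # Rule 2: only the installer may write into system directories.
    if action == "write" and target.lower().startswith(SYSTEM_DIRS):
        return "ALLOW" if proc == "msiexec.exe" else "BLOCK"
    return "ALLOW"

# A mass-mailer trying to launch a shell is stopped with no signature.
print(evaluate({"process": "outlook.exe", "action": "spawn",
                "target": "cmd.exe /c evil.bat"}))  # -> BLOCK
```

Note that no pattern file is consulted: a brand-new worm triggers the same rule as an old one, which is exactly the gap this approach fills.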

[Figure: Coverage Areas]

Finally--and arguably the most important technical step--organizations must take a hard look at how they tackle their operating system and application permissions challenges. Users logging in as administrators, applications running with unrestricted file access and services that aren't always required all stymie desktop-management teams trying to maintain controlled environments. If you don't restrict users and applications, you can't restrict the effects of malicious code once it hijacks them. Smart IT teams will look to harden both their OSs and applications, and start applying the concept of "least privilege" to their desktop deployments.

It's rare, for example, to find applications like Internet Explorer in a more hardened state. Comparable alternatives, such as the Mozilla Firefox project, actually come out of the box in a less trusting state. But even a hardened browser, running under the context of a user who has free rein on the system, multiplies the potential damage when--not if--it gets hijacked. Getting ahead of the problem requires paying attention to the exposures introduced by our network applications, keeping those applications patched, monitoring their use and limiting the potential impact should one of those applications be compromised. Otherwise, you'll be right back in that reactionary stance.
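A cheap first step toward least privilege is simply auditing where administrative rights are in use. Here's a minimal sketch, assuming Python is available on the desktops in question; the Windows branch relies on the shell32 IsUserAnAdmin call:

```python
# Minimal least-privilege audit: is this process running with
# administrative rights? Useful as a login-script check while weaning
# users off administrator accounts.
import os
import sys

def running_as_admin():
    if os.name == "nt":
        import ctypes
        # Nonzero when the calling process holds an administrator token.
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    return os.geteuid() == 0  # root on Unix-like systems

if __name__ == "__main__":
    if running_as_admin():
        print("WARNING: running with administrative privileges")
        sys.exit(1)
    print("OK: running as a limited user")
```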

Ideally, we'd all like to run the most effective antivirus, antispyware, desktop firewall, host intrusion-prevention and patch-management packages available--regardless of vendor. But part of "effective" is "cost-effective." Supporting multiple vendors on the desktop is expensive. Let's face it: Touching the desktop at all is expensive. Basic software licensing costs are the tip of the iceberg. Each additional piece of software or agent needs care and feeding. Deployment resources, upgrades, compatibility concerns, troubleshooting time, annual support contracts ... the list goes on. More does not equal better when it comes to the number of agents and, specifically, the number of vendors involved with our desktop environment.

The cost of supporting multiple vendors for desktop defense doesn't stop with the endpoint agents. There are further complications on the management and operations sides; each product typically needs a separate control/management mechanism, which often requires at least one central management server. That's another server to maintain, another console to support and another virtual mouth to feed. For example, if an organization wants to select a host intrusion-prevention product from Vendor X, an antivirus product from Vendor Y and a desktop firewall from Vendor Z, it's going to need at least three separate management servers and frameworks. The "best of breed" argument begins to collapse on the desktop when you factor in the true TCO (total cost of ownership) of maintaining products from multiple vendors.

Looking ahead, the savings associated with unified suites play to the advantage of larger providers such as Symantec and McAfee, and those savings will only increase in the coming years. However, don't be fooled by the illusion of unification--both Symantec and McAfee continue to acquire technology, but those acquisitions aren't always fully digested when products come to market. For instance, it wasn't until version 8.0i of the McAfee antivirus suite that the company integrated some of the acquired Entercept host intrusion-prevention technology, and neither McAfee nor Symantec had official antispyware offerings until this month--a tardiness that still leaves many of us scratching our heads. These aren't showstoppers, but even the big guys don't have everything humming along just yet. Take a good look at your desktop security providers to see how comprehensive their products really are.

Future Tactics

The agent-on-desktop approach is not the only route to proactive protection. In fact, it shouldn't be. Although we don't recommend skipping the desktop antivirus deployment, many malware outbreaks can be prevented by using wisely placed control mechanisms at critical network choke points. For example, we've experimented with appliances from Blue Coat in the Chicago Neohapsis office to help protect against Web-based threats. The biggest attraction with Blue Coat is that its filtering mechanisms aren't purely pattern-based. The company bases its protection mechanisms on four primary factors: the type of content being pulled down (cab install files versus straight HTML, for example), black- and whitelists, binary signatures (similar to conventional antivirus) and traffic behavior (applications creating back-channel communications, for instance). Again, using mechanisms that aren't entirely tied to the break/fix antivirus pattern-matching world will help address both known and unknown threat types.
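Those four factors amount to independent checks where any hard failure blocks the download. Here's a simplified sketch--the lists and signatures are invented, and the real appliance's logic is certainly more involved:

```python
# Sketch of multi-factor download filtering: four independent checks,
# and any hard failure blocks the fetch. All lists are invented.
HOST_BLACKLIST = {"evil.example.com"}
RISKY_CONTENT_TYPES = {"application/x-cabinet", "application/octet-stream"}
KNOWN_BAD_PREFIXES = (b"MZ\x90\x00BAD",)  # fictional binary signature

def allow_download(host, content_type, first_bytes, opens_back_channel):
    if host in HOST_BLACKLIST:
        return False          # factor: black- and whitelists
    if content_type in RISKY_CONTENT_TYPES:
        return False          # factor: type of content being pulled down
    if first_bytes.startswith(KNOWN_BAD_PREFIXES):
        return False          # factor: binary signatures
    if opens_back_channel:
        return False          # factor: traffic behavior
    return True

print(allow_download("www.example.com", "text/html", b"<html>", False))  # True
print(allow_download("www.example.com", "application/x-cabinet",
                     b"MSCF", False))                                     # False
```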

Another critical choke point is the SMTP gateway, where many e-mail pathogens enter our environments. Antivirus technology obviously helps here, too, but the time delay in pattern updates usually puts this control in the "containment" category, not the "prevention" one. However, by restricting the types of attachments the e-mail gateway accepts, organizations can proactively block more hostile content. The policies and culture of the organization will dictate whether to implement a default deny (whitelist) or default allow (blacklist) approach, but the more restrictive an organization can be, the better the chances of blocking malicious packages.
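The whitelist approach reduces to a few lines of policy logic. Here's a sketch, with an example extension list each organization would tune to its own culture; note how it catches the classic double-extension trick:

```python
# Default-deny (whitelist) attachment policy: only enumerated extensions
# pass; everything else is rejected. The allowed list is an example.
import os

ALLOWED_EXTENSIONS = {".pdf", ".txt", ".doc", ".xls"}

def attachment_permitted(filename):
    # splitext keeps only the final extension, so "invoice.pdf.exe"
    # is judged by ".exe" -- the classic double-extension trick fails.
    ext = os.path.splitext(filename.lower())[1]
    return ext in ALLOWED_EXTENSIONS

for name in ("report.pdf", "invoice.pdf.exe", "holiday.scr"):
    print(f"{name}: {'accept' if attachment_permitted(name) else 'reject'}")
```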

But here's the rub: One must assume that such controls will eventually be circumvented, particularly as malicious code continues to mature. It will take a combination of technology, process and consumer awareness to make significant strides in reducing the desktop threat, and savvy organizations will be working on all three simultaneously. Simple user awareness will continue to play a huge role in combating these threats, and organizations must push education for their user bases. Attackers' cleverness will always lead to the occasional user error, but we can hope to reduce the number of these occurrences and limit the damage they cause.

Finally, IT needs a protection plan for the technology areas that have been flying under the radar until very recently. For example, the Bropia worm hit the scene while this article was being written, and cell phone protection is becoming a real concern as these devices start to look suspiciously like PCs. Ultimately, we must continue to be smart about our defenses while simultaneously sending a message to our suppliers: We want products that protect us out of the gate. The Band-Aid approach just ain't cutting it.

Greg Shipley is the CTO for Chicago-based security consultancy Neohapsis. Write to him at [email protected].

Gone are the days when a virus was your worst problem. Now there's spyware, malware, phishing and a legion of vicious attackers on the horizon. When users innocently open e-mail and instant messages, they leave the doors wide open to the bad guys lurking on the fringe, trying to figure out how to get into your network. In this Affordable IT installment, Greg Shipley suggests some often overlooked weaknesses and inexpensive ways to protect yourself. We show you how to leverage your existing technologies rather than chase after the latest, greatest panacea, and discuss the security pitfalls of using equipment from multiple vendors. Your network is only as safe as the nearest desktop computer, but with some care, you can fight back without watching your budget go up in smoke.


The concept of MAC (mandatory access control) is not new, but adoption outside of government and research circles is rare. Although the definition of MAC varies depending on whom you ask, it's basically an access-control philosophy that defines, at a granular level, how objects within an operating system are granted permissions. MAC is often described as the opposite of DAC (discretionary access control), the method most mainstream computing environments are built on today.

Imagine, for example, that desktop applications must be granted specific rights in the operating system--based partially on user credentials--before they can access system and network resources. If your OS allowed your Web browser to write only to a handful of directories and speak only on a few ports, and restricted its kernel-level access to a defined set of functions, the hostile-code threat probably wouldn't be so problematic. MAC could help realize that dream.
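Conceptually, a MAC policy is a table of enumerated rights per application, with everything else denied. Here's a toy sketch of the lookup--real implementations such as SELinux enforce this in the kernel, not in user-space code like this:

```python
# Toy MAC lookup: each application gets only the rights enumerated in
# the policy table; unknown applications and actions are denied.
POLICY = {
    "browser": {"write_dirs": ("/home/user/downloads", "/tmp"),
                "ports": {80, 443}},
    "mailer":  {"write_dirs": ("/home/user/mail",),
                "ports": {25, 110, 143}},
}

def permitted(app, action, target):
    rules = POLICY.get(app)
    if rules is None:
        return False  # unknown application: deny by default
    if action == "write":
        return str(target).startswith(rules["write_dirs"])
    if action == "connect":
        return target in rules["ports"]
    return False  # unknown action: deny by default

print(permitted("browser", "write", "/home/user/downloads/a.zip"))   # True
print(permitted("browser", "write", "/windows/system32/evil.dll"))   # False
print(permitted("browser", "connect", 6667))                         # False
```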

However, MAC comes at a price: It adds complexity, and, as with any technology, a successful MAC-centric deployment requires proper configuration, maintenance and management backing, none of which is trivial. For MAC to work, organizations must create operating templates for every supported application, which could prove difficult in environments with certain cultural ideals and resource constraints. Take, for example, the philosophical approach to blocking e-mail attachments: Some organizations opt for an "implicit deny" (block everything by default, specify what to allow) SMTP gateway attachment policy, while others take an "explicit deny" (allow by default, specify what to block) position. Implicitly denying everything and granting access only to known acceptable items is the preferred (and safer) method of access control, but many IT staffs struggle to make this an accepted philosophy in their corporate culture. Security purists are quick to point out that most of IT has conventionally gone the "explicit deny" route in policy decisions, which has led us to a reactive position of chasing things that are later discovered to be "bad."

Today, you'll find pieces of MAC implementations in TrustedBSD and the 2.6 Linux kernel, and you can get working deployments off the ground with packages like SELinux and Immunix. Although one could argue that early iterations of third-party host intrusion-prevention software took a stab at bringing MAC concepts to the Windows platform, the Microsoft kernels are devoid of MAC-friendly "hooks."

With the growing demand to curb the onslaught of vulnerabilities, perhaps the concept of least-necessary privilege will gain popularity. Until it does, we'll continue our reactive, and largely ineffective, chase after "bad things."

One of the most inexpensive ways to safeguard your systems is to keep your patching efforts up to date. Most organizations understand the urgency of deploying critical patches to their operating systems and infrastructure, but as the cross-hairs start to move away from exploiting OS holes towards exploiting application holes, our efforts must move with them.

Look back at the past 12 months of advisories, and you'll find exploitation methods against Web browsers, e-mail clients, utilities, MP3 players and components, including the Adobe Acrobat viewer. Even antivirus engines risk coming under siege: Recent advisories detail vulnerabilities in engines from F-Secure, Symantec and Trend Micro. One would hope the quality-assurance process around security products would be tight enough to avoid such problems, but the need to patch your security products against holes just reinforces the point. Smart organizations will ensure their application-inventory systems are robust and current, and will have the processes and technology in place to deploy updates whenever necessary.
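The inventory is the prerequisite; once you have it, checking it against minimum-safe versions is straightforward. A minimal sketch--all application names and version numbers here are invented:

```python
# Minimal inventory check: flag applications older than the minimum
# version known to contain the fix. All names and versions are invented.
INVENTORY = {"web_browser": (1, 0, 2), "pdf_viewer": (6, 0, 0)}
MINIMUM_SAFE = {"web_browser": (1, 0, 3), "pdf_viewer": (6, 0, 0)}

def needs_patching(inventory, baseline):
    # Version tuples compare element-wise: (1, 0, 2) < (1, 0, 3).
    return [app for app, version in inventory.items()
            if version < baseline.get(app, version)]

for app in needs_patching(INVENTORY, MINIMUM_SAFE):
    print(f"PATCH NEEDED: {app}")  # -> web_browser
```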
