Survivor's Guide to 2006: Security

As you prepare for 2006, you need compliance-driven products to ensure your company doesn't become the next security-breach headline. But don't be fooled by all the vendor hype.

Mike Fratto

December 16, 2005

13 Min Read

The Case for Compliance

Getting funding based on fear is not a viable long-term option. About 40 to 50 organizations had public exposures in 2005, but that's a small minority of all U.S.-based companies. It's reasonable for business managers to rationalize that those companies had other problems, and such breaches won't happen to their organizations. Security administrators must push the business value of security purchases. Granted, articulating the business benefit of an IPS (intrusion-prevention system) is difficult, but regulatory compliance can be your friend in both getting funding for new projects and getting required security features into other IT projects. Pick your industry, and chances are a law like HIPAA (Health Insurance Portability and Accountability Act), Sarbanes-Oxley, GLBA (Gramm-Leach-Bliley Act) or FISMA (Federal Information Security Management Act) applies. Failure to comply can mean big fines.

But don't beat that drum too hard. The fines levied for noncompliance may be a pittance compared with the cost of purchasing and deploying products. The fine for unknowingly violating a HIPAA provision, for example, is capped at $25,000 per calendar year for violations of an identical requirement.

However, the compliance angle combined with other motivators--such as improved processes, better protection and reduced risk of attack--can make a compelling argument. And if your organization is ever dragged into court, proving that it complies with regulations and best practices demonstrates due care.

On the flip side, security product vendors are all waving the compliance flag, trying to get your attention and your dollars. But which technologies satisfy which regulations? HIPAA and other statutes mandate or recommend certain technical capabilities, but technologies change, and how the laws will be interpreted won't be known until cases work their way through the courts. Work with your legal counsel when addressing compliance issues.

And don't think of compliance strictly in terms of products. Think of it as a multistep process: state what controls are needed to comply with a law or regulation, document how those controls will be implemented, put the controls in place and prove they are properly configured. HIPAA 45 CFR 164.312(d), for example, requires that users accessing protected health information (PHI) be authenticated. So if your organization must comply with that act, you need a control, such as a user name/password, token or biometric authentication system. You also need a security policy statement that defines your authentication policy. And you must document the processes that ensure all users have a password.
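
To make that last step concrete, here is a minimal sketch of turning a documented password policy into audit evidence. It assumes a hypothetical CSV export of a user repository with username, password_required and password_last_set columns; your repositories, column names and policy thresholds will differ.

```python
# Sketch: turn an authentication-control requirement into audit evidence.
# The CSV export and its columns are hypothetical stand-ins for whatever
# your user repositories can produce.
import csv
from datetime import datetime, timedelta

MAX_PASSWORD_AGE = timedelta(days=90)  # from your security policy statement

def find_violations(export_path):
    violations = []
    now = datetime.now()
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["password_required"].lower() != "true":
                violations.append((row["username"], "no password required"))
                continue
            last_set = datetime.fromisoformat(row["password_last_set"])
            if now - last_set > MAX_PASSWORD_AGE:
                violations.append((row["username"], "password exceeds policy age"))
    return violations

if __name__ == "__main__":
    for user, reason in find_violations("user_export.csv"):
        print(f"{user}: {reason}")
```

A report like this, run on a schedule and archived, is the kind of artifact an auditor can actually inspect.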

In short, you may have to prove that your organization does what you say it does during an audit, and that can be a costly endeavor; in the case of authentication, you may need to audit multiple user repositories. Compliance stretches across many deployed applications and products within your organization. Those products must comply with your requirements, and you must provide proof that they are in compliance. Take the opportunity while initiating new projects or retooling existing processes to build in security initiatives.

When a vendor tells you its product helps with compliance, your first question should be, "Which laws and regulations, specifically?" If the vendor can tell you that, determine whether the requirement applies to you. HIPAA 45 CFR 164.312(a)(2)(iii), Automatic Logoff, for example, is an addressable technical safeguard, which means it should be implemented; if it isn't, you must explain why.
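
To see what an "addressable" safeguard boils down to: automatic logoff is ultimately idle-session bookkeeping. The sketch below shows the core logic under stated assumptions--a bare session table and a periodic sweep; the session identifiers and the 15-minute limit are hypothetical stand-ins for whatever your policy specifies.

```python
# Sketch: the idle-timeout logic behind an automatic-logoff control.
# A real application would hook this into its session middleware
# rather than a bare dict.
import time

IDLE_LIMIT_SECONDS = 15 * 60  # e.g., 15 minutes of inactivity

sessions = {}  # session_id -> timestamp of last activity

def touch(session_id):
    """Record activity on each authenticated request."""
    sessions[session_id] = time.monotonic()

def enforce_logoff():
    """Terminate sessions idle beyond the limit; run periodically."""
    now = time.monotonic()
    for sid, last in list(sessions.items()):
        if now - last > IDLE_LIMIT_SECONDS:
            del sessions[sid]  # force re-authentication on the next request
```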



There are multiple points of access to IT resources, but in all cases, access starts with a user at a workstation or handheld device. Problems at the workstation run the gamut: users with too much access (laptop users often have local administrator rights), unwitting installation of Trojans, worms and other malware, and configuration changes. Improperly updated security software opens yet another hole. It's safe to say, with 20/20 hindsight, that the conventional network perimeter model--with security measures such as firewalls, intrusion-detection systems and antivirus software focused at the edge of the network--is flawed. Not only does the cliché of the "crunchy outside, soft chewy inside" apply, but users who roam in and out of the network change the boundaries just by being inside or outside. Even in a network where no computers, including PDAs, leave the building, the perimeter model shows its Achilles' heel when a worm rampages unchecked internally.

The ideal answer to worm outbreaks is to patch systems as soon as patches are released, because the window from patch notification to worm release is now just a few days. A multipronged approach--egress filtering, early warning of worm activity through network anomaly detection and tight controls on remote mobile computers--can reduce the impact of a fast-moving worm.
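
What does network anomaly detection for worms look like in practice? One classic heuristic flags hosts with sudden connection fan-out. The sketch below is a minimal illustration, assuming hypothetical flow records of (timestamp, source IP, destination IP, destination port); real products use far richer baselines.

```python
# Sketch: a crude fan-out heuristic for worm early warning. A host that
# suddenly contacts many distinct destinations on one port within a
# short window is worth an alert. The flow-record format is hypothetical.
from collections import defaultdict

FANOUT_THRESHOLD = 50   # distinct destinations per window
WINDOW_SECONDS = 60

def scan_flows(flows):
    alerts = []
    window = defaultdict(set)  # (src_ip, dst_port, bucket) -> {dst_ip, ...}
    for ts, src, dst, port in flows:
        bucket = int(ts) // WINDOW_SECONDS
        key = (src, port, bucket)
        window[key].add(dst)
        if len(window[key]) == FANOUT_THRESHOLD:  # == so each key alerts once
            alerts.append(f"{src} hit {FANOUT_THRESHOLD} hosts on port {port}")
    return alerts
```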

Is network-access control--the marriage of knowledge about the host and the user with the application of policy granting network access--the answer? We can't say yet. The products are too immature and the infrastructure vendor initiatives are missing parts, but the principles behind network-access control are sound.

Authentication, authorization and auditing (AAA) have long been a cornerstone of computer security. The authentication part--ensuring users are who they say they are--is well instrumented in IT environments. The auditing part is generally weak, largely because of missing features in IT systems (an event log is not an audit log). The authorization part is the critical missing piece.
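
A minimal sketch of that missing piece: after authentication succeeds, each action still needs an explicit authorization decision, and the decision itself is worth logging. The role and permission names below are hypothetical.

```python
# Sketch: the authorization step that AAA deployments often skip.
# Authenticating alice is not enough; each request should also be
# checked against what alice is allowed to do.
ROLE_PERMISSIONS = {
    "clinician": {"read_phi", "update_phi"},
    "billing":   {"read_phi"},
    "helpdesk":  set(),
}
USER_ROLES = {"alice": "clinician", "bob": "helpdesk"}

def audit_log(user, action, allowed):
    # Auditing rides along with every authorization decision.
    print(f"AUDIT user={user} action={action} allowed={allowed}")

def authorize(user, action):
    role = USER_ROLES.get(user)
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log(user, action, allowed)
    return allowed

print(authorize("alice", "update_phi"))  # True
print(authorize("bob", "read_phi"))      # False
```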

Some visionary vendors, including InfoExpress, Sygate and Zone Labs, have provided network-access control solutions for some time, but their models generally require yet another perimeter device to grant access to the network--one more product to purchase, manage and maintain. You still end up with perimeters, just more and smaller ones. But as Greg Shipley points out in "But Will It Work?", conventional infrastructure vendors like Cisco Systems, Enterasys Networks and Juniper Networks are coming to the table with workable network-node validation products in 2006.

The infrastructure model requires vendors to integrate products, and like most such projects, these integrations will generally be one-off relationships, with one vendor or the other dictating APIs and components. The word "open" gets tossed around a lot--as in Cisco will make its standards "open" and submit them to standards bodies. But standards bodies like the IEEE, IETF and ISO don't rubber-stamp vendor requests to publish standards. At the IETF, expect the Cisco vision to be introduced as informational RFCs rather than standards-track documents.

With such a variety of deployment options, you won't have to completely rearchitect your network. If you simply want to ensure that only company-issued equipment connects to the network, for example, go with 802.1X--already used in WLAN deployments to enforce authentication. If you want more control over mobile computers, targeting deployments of standalone systems is a good start. Just be sure you can expand later.
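
Stripped of protocol detail, the "company-issued equipment only" policy reduces to a lookup by the authentication server. The toy sketch below illustrates that decision, assuming machine identities (say, certificate common names) are checked against a hypothetical inventory; real 802.1X deployments delegate this to a RADIUS server.

```python
# Sketch: the policy decision behind "company-issued equipment only,"
# reduced to its simplest form. Inventory and identity format are
# hypothetical.
COMPANY_MACHINES = {"LAPTOP-0042", "LAPTOP-0107"}  # e.g., machine-cert CNs

def network_access_decision(machine_identity):
    if machine_identity in COMPANY_MACHINES:
        return "ACCEPT"  # place on the production VLAN
    return "REJECT"      # or divert to a guest/remediation VLAN

print(network_access_decision("LAPTOP-0042"))  # ACCEPT
print(network_access_decision("UNKNOWN-PC"))   # REJECT
```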

Meanwhile, the Trusted Computing Group's Trusted Network Connect (TNC) working group is marching out standards documents that specify how the parts integrate: client components, the authorization server, associated back-end systems and the enforcement points.

Although TNC is covering similar ground and has quite a few vendors on board, Cisco isn't part of it, and that leaves a huge hole.

Yet, what you want are products that work together effectively and efficiently. That means you need to ensure vendors demonstrate interoperability in pilot projects and initial deployments. About 80 percent coverage isn't good enough.

Information Is Still King

Pure-play IPS is disappearing, being absorbed into the network firewall. It makes sense. IPSs are typically deployed in an IDS configuration first, with prevention services enabled slowly. The fundamental problem with intrusion detection is false positives--legitimate traffic causing alerts--and the volume of IDS events can overwhelm security administrators. Sure, tuning, disabling or enabling IDS signatures for particular hosts can reduce false positives, but it takes an expert to really tune an IDS. More important, once you're blocking traffic flows, false positives become a denial of service.

IDS/IPS vendors have tried to solve the false-positive problem by improving the detection side, but signature detection can be taken only so far. Ultimately, an IDS can tell you only that an attack was attempted, not whether it could be or was successful. You want to know whether you have to worry about an attack--not merely that it happened. Check your Web server logs: they're chock-full of old directory-traversal and IIS attacks.


Anomaly detection--flagging traffic that deviates from normal behavior--is effective for certain problems, such as reconnaissance, worm activity and policy violations, but it can't alert you to targeted, successful attacks.

The evolutionary step is to combine signature IDS with active and passive vulnerability analysis, and to add anomaly detection. The goal is to provide context for determining how critical a detected attack attempt may be. Products from Sourcefire Network Security, Tenable Network Security and NFR Security that use multiple detection methods do reduce the number of false alerts, but they still can't express the correlation required to process a request like "if there is an attack against a target system and that target system displays abnormal behavior after the detected attack, then raise the criticality and send alerts." That verges on the realm of SIM (security information management).
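
In code, the rule itself is not complicated; what's hard is getting normalized, timely feeds from separate systems. A minimal sketch, assuming hypothetical (timestamp, host) tuples from an IDS feed and an anomaly feed:

```python
# Sketch: the correlation rule quoted above. Event formats are
# hypothetical: ids_alerts and anomalies are lists of (timestamp, host)
# tuples from two separate detection feeds.
CORRELATION_WINDOW = 300  # seconds after the attack to watch the target

def escalate(ids_alerts, anomalies):
    critical = []
    for attack_ts, target in ids_alerts:
        for anomaly_ts, host in anomalies:
            if host == target and 0 < anomaly_ts - attack_ts < CORRELATION_WINDOW:
                critical.append((target, attack_ts))  # raise criticality, alert
                break
    return critical

print(escalate([(1000, "10.0.0.5")], [(1090, "10.0.0.5")]))  # escalates
```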

If you have multiple data feeds, a couple hundred grand and several months to a year, SIM tools can aggregate and process lots of data in near real time, searching for predefined complex conditions that indicate an attack is under way. SIMs try to algorithmically mimic the mental processing of a security administrator, floating only the interesting events to the surface and reducing overall workload.
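
One of the simpler workload reducers a SIM applies is collapsing repeats: a thousand identical events become one summarized record. A minimal sketch, with a hypothetical (timestamp, source, signature) event format:

```python
# Sketch: collapse repeated identical events into per-window summaries,
# one small piece of how a SIM floats interesting events to the surface.
from collections import Counter

WINDOW_SECONDS = 60

def summarize(events):
    buckets = Counter()
    for ts, source, signature in events:
        buckets[(int(ts) // WINDOW_SECONDS, source, signature)] += 1
    for (bucket, source, signature), count in sorted(buckets.items()):
        yield f"{signature} from {source}: {count} events in window {bucket}"

events = [(1, "10.0.0.9", "IIS-traversal")] * 500
for line in summarize(events):
    print(line)  # one summary line instead of 500 raw alerts
```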

But more than $100,000 is a lot of cheese just to get a SIM in the door. Alternatives--from the SIM-like capabilities found in IDSs and vulnerability-management systems to smaller-scale SIMs targeted at a smaller chunk of the security events--may be the answer. The goal is to turn events into actionable data.

It's the Application, Stupid

Exploits against Web applications are on the rise, probably because the low-hanging fruit of server-based vulnerabilities has been picked, or because vulnerability researchers and vendors are working more closely together. Whatever the reason, service-level vulnerabilities are scarcer, and the action has moved to application-level exploits. Poorly coded Web applications using PHP, Perl, ASP, JSP or some other application framework provide the data path from the external user to an internal database. That path presents an opportunity for an attacker to gain more access than intended.
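
The data path is easy to demonstrate. The sketch below shows the same query built by string concatenation and by parameter binding; the table and data are hypothetical, but the injection probe is the classic one.

```python
# Sketch: how a poorly coded query hands the attacker the database,
# and the parameterized form that keeps user input as data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "alice' OR '1'='1"  # a classic injection probe

# Vulnerable: user input is spliced into the SQL statement itself.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'").fetchall()
print("concatenated:", rows)    # returns every row -- injection succeeded

# Safe: the driver binds user input as a value, never as SQL.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)).fetchall()
print("parameterized:", rows)   # returns nothing -- no such name
```

The parameterized form works because the driver never lets user input become part of the SQL statement, which is why "write more secure code" so often starts right here.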

It's easy to tell developers to "write more secure code," but unless you're a developer yourself, it's difficult to determine whether they really are. And if you're buying a commercially developed application, you probably can't make such demands. Ideally, security problems are handled in the application; as a fallback, Web application firewalls are poised to police Web traffic. The first iterations of Web application firewalls were little more than HTTP application proxies with HTML parsing engines. Although they could block many attacks, they were difficult to learn and tune, and they impeded traffic. Those first-generation products also couldn't handle Web services.

However, Web application firewalls from vendors like F5 Networks, Imperva and Radware have largely overcome the performance problems and provide a reasonable defense against application-level attacks. Web services gateways positioned in front of Web services hosts or at the data center edge can thwart a range of attacks, such as XML DoS and data manipulation.
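
At their core, these products inspect requests against policy before the application sees them. The toy sketch below gives the flavor under heavy simplification: a real Web application firewall uses protocol parsers and learned positive-security policies, not a handful of illustrative regexes.

```python
# Sketch: toy request inspection in the spirit of a Web application
# firewall. Patterns are illustrative, not a real rule set.
import re

SUSPICIOUS = [
    re.compile(r"\.\./"),                 # directory traversal
    re.compile(r"(?i)<script"),           # reflected XSS probe
    re.compile(r"(?i)\bunion\s+select"),  # SQL injection probe
]

def inspect(params):
    """Return the request parameters that trip a rule; params is a dict."""
    hits = {}
    for name, value in params.items():
        for pattern in SUSPICIOUS:
            if pattern.search(value):
                hits[name] = pattern.pattern
    return hits

print(inspect({"page": "../../etc/passwd", "q": "widgets"}))
```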

CVE: Common Vulnerabilities and Exposures, Mitre's dictionary of publicly known vulnerabilities. Each year, more vendors use CVE identifiers in their products. Recently, Mitre did away with the CAN/CVE numbering distinction, and all CANs will be converted to CVEs. That's a good move and certainly clears up the naming system.

OVAL: Open Vulnerability and Assessment Language, which attempts to standardize the detection and reporting of vulnerabilities and system configurations. This is a leap beyond Mitre's other initiative, CVE, and one that should reduce ambiguity among vulnerability-assessment tools.

CVSS: Common Vulnerability Scoring System, from the Forum of Incident Response and Security Teams (FIRST), which attempts to apply common scoring to vulnerabilities and environments. As with OVAL, adherence to this standard should reduce confusion. (A scoring sketch follows the TCG entry below.)

TCG: The Trusted Computing Group's working groups are standardizing, on multiple fronts, the interfaces and messaging for attesting to the condition of a computer, authenticating users and computers to other systems, and relaying condition and posture to the network for network-access control. They compete with vendor initiatives from the likes of Cisco and Juniper.
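
The scoring sketch promised in the CVSS entry: it shows how a common score falls out of fixed metric weights. One caveat: CVSS v1 is current as this is written; the equations below are the later v2 base form, used purely because it is the better-documented illustration of the idea.

```python
# Sketch: CVSS v2 base-score arithmetic, shown to illustrate how fixed
# metric weights yield a common score. Weights per the v2 specification.
def cvss_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    f = 0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f, 1)

# Remote, low-complexity, unauthenticated, total compromise -> 10.0
print(cvss_base(av=1.0, ac=0.71, au=0.704, c=0.66, i=0.66, a=0.66))
```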

Mike Fratto is editor of Secure Enterprise. He was previously a senior technology editor for Network Computing and an independent consultant in central New York. Write to him at [email protected].

Last year, we forecast several developments in the security arena, and some of our predictions have come only partially true. We said, for instance, that IPS would be usable in 2005. In fact, IPS is becoming a feature deployed in limited ways, but it still doesn't alleviate the need to tune signatures.

It's a good time to pilot projects using NAP (network access protection) and NAC (network access control). Unfortunately, many of these products aren't even shipping yet. Cisco Systems recently announced support on its switches, and Juniper Networks is pounding the pavement with its Infranet suite. Good times.

SSO (single sign-on) isn't going to happen in 2005, and it probably won't happen in 2006. That's because SSO is just one part of identity management, which is being built and sold to IT when it should be sold to HR and accounting. Identity management is a business process that requires re-engineering internal processes end to end. For SSO to work, the identity stores must be cleaned and organized, and a strict naming schema must be developed and enforced. Without those critical pieces, getting SSO to work across platforms is a manual job of relating accounts with different naming conventions.
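
That manual job is worth seeing in miniature. A crude sketch, assuming two hypothetical repositories and a naive normalization rule--exactly the kind of heuristic that fails without cleaned stores and an enforced naming schema:

```python
# Sketch: relating accounts across repositories with different naming
# conventions. Repositories and conventions shown are hypothetical.
def normalize(account):
    # "john.smith" and "JOHNSMITH" might be the same person; a real
    # project needs cleaned stores and an enforced schema, not guesses.
    return account.lower().replace(".", "").replace("_", "")

ldap_accounts = ["john.smith", "mary.jones"]
mainframe_accounts = ["JOHNSMITH", "MJONES"]

ldap_index = {normalize(a): a for a in ldap_accounts}
for acct in mainframe_accounts:
    match = ldap_index.get(normalize(acct))
    print(f"{acct} -> {match or 'NO MATCH: manual review'}")
```

Note that the second account fails to match: normalization heuristics can't recover intent, which is why the naming schema has to come first.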


