Secure to the Core

Most organizations don't need more expensive security controls, just more effective ones.

January 20, 2003


Enemies Inside the Gates

Cautionary tales of Internet hackers extraordinaire and other dangers lurking in the Web forest have led us down the path of constructing steel doors in open fields. The emphasis has been on the doors, rather than on what they are protecting. Truth No. 2: We must become less perimeter-centric and more asset-centric, because the reality is we can't protect it all.

[Figure: Security Program Structure]

Without a firm grasp of what we're guarding, where it resides and how valuable it is, how can we hope to quantify necessary levels of protection, much less achieve them? Without open lines of communication between IT and business units, how can security teams quantify the true threat to digital assets?

Unfortunately, when it comes to assets, the problem lies with both the business and security teams; most business operators know little about infosec, and infosec practitioners know little about the business. Without a better understanding on each side, common ground will not be found.

In a cost-conscious economy, organizations don't need more expensive security controls, they need more effective ones. It's time to regroup, re-evaluate, and make 2003 the year holistic strategies take center stage.

Infosec Triage

As any data defender in a large enterprise will tell you, it's a lot easier to attack than it is to defend. Intruders need only find one chink in the armor, while protectors need to outfit all their assets with armor while battling restrictive budgets, limited resources, nebulous perimeters, open systems and an onslaught of ongoing technical vulnerabilities. Hence the continued emphasis on the "defense in depth" concept: creating multiple defense tiers in the hopes that, should one fail, another will provide the necessary protection. But what should we be defending? Servers? Networking equipment? Desktops? Files? Backup tapes? Applications? Databases?

Most security personnel will say: "All of the above," and while that answer isn't necessarily wrong, there's a greater chance of achieving world peace. Remember: You can't protect it all. While no one likes picking sacrificial lambs, infosec triage is a necessity. Protecting what is most important is the best you're going to do, because cold hard truth No. 3 is that bulletproof security does not exist.

The basis of the triage process is distinguishing what is more valuable from what is less valuable, taking into account the heart of information technology: information. Propagating, distributing and using information are what drive the need for desktops, servers, software and networking gear. And, within most organizations, the value of information varies based on business importance and sensitivity. This is no surprise. However, just what value should be assigned to each piece of data is not always clear to IT and security personnel. In addition, some types of information have proven more prone to attack than others.

For example, if we examine loss statistics, a story unfolds: Certain types of data are more tempting targets, and the losses associated with these targets are substantial. The 2002 CSI/FBI Computer Crime and Security Survey of 503 computer security practitioners makes clear that while abuse of Internet access and virus outbreaks are the most common incidents with financial ramifications, theft of proprietary information is by far the most expensive. The survey also notes that proprietary-information theft can come from both internal and external intruders (see report highlights).

The top items stolen include financial statistics, research and development data, strategic plans and customer lists, according to a survey of 138 companies, including both Fortune 1000 and small and midsize businesses, conducted by ASIS and PricewaterhouseCoopers ("Trends in Proprietary Information Loss," at www.asisonline.org/pdf/spi2.pdf).

Rather than debate the location of the attacker, why not consider the location of the target? Circling back to the concept of triage, forward-thinking security teams are combating the problem by working with the business side to identify key targets, then creating identification and classification systems. Once they know which assets are most important, prioritization efforts can follow. Classification strategies typically start with data-grouping efforts, which can be rolled into more complex asset-classification systems when combined with variables such as system type, function or criticality.

Starting with data classification, these frameworks can be as simple as two- or three-tier systems or as complex as variable asset-value models (see "Companies Struggle With Data Classification").

A basic data-classification plan may start with the data and provide a framework for grouping that data into two or more classification tiers. A three-tier method may include categories such as public data, private data, and proprietary and confidential data.

For example, schematics for the next-generation Strong-Bad 3000 cannery machine--which is capable of packaging potted meat at the rate of 3,000 CPMs (cans per minute) and could revolutionize the potted-meat industry--would be considered sensitive and valuable by the machine's maker. In our three-tier model, the data relating to these schematics would be classified as proprietary and confidential. In contrast, last year's sales brochures touting the aging Strong-Bad 325i models, available via the company Web site, would be classified as public data.
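To make the tiers concrete, here is a minimal sketch in Python; the inventory entries, names and default behavior are hypothetical illustrations of the three-tier model above, not a prescribed implementation.

```python
from enum import Enum

class DataClass(Enum):
    """Three-tier data-classification model described above."""
    PUBLIC = 1                      # e.g., published sales brochures
    PRIVATE = 2                     # internal-only business data
    PROPRIETARY_CONFIDENTIAL = 3    # trade secrets, schematics, etc.

# Hypothetical inventory entries mapping data items to a tier.
data_inventory = {
    "strong-bad-3000-schematics": DataClass.PROPRIETARY_CONFIDENTIAL,
    "strong-bad-325i-brochure-2002": DataClass.PUBLIC,
    "q4-payroll-extract": DataClass.PRIVATE,
}

def classification_of(item: str) -> DataClass:
    """Look up an item's tier; unclassified data defaults to the most
    restrictive tier until someone classifies it explicitly (a design
    choice assumed here, not mandated by the article)."""
    return data_inventory.get(item, DataClass.PROPRIETARY_CONFIDENTIAL)

if __name__ == "__main__":
    for name in ("strong-bad-3000-schematics", "unclassified-export.csv"):
        print(name, "->", classification_of(name).name)
```

Defaulting unknown items to the most restrictive tier is one possible design choice; it forces explicit classification before any controls are relaxed.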

While this example is simplistic, the success of a classification effort is often determined by its simplicity. A four-tier model might introduce a tier between private and proprietary--after all, the more tiers, the more granular the organization's data-classification efforts can be. However, with that granularity come added complexity, a larger margin for error, and potentially higher costs associated with making the classification process a reality.

Let's move from data classification to asset classification. In this case, an asset might be a piece of data, a single system or a group of systems that perform a given business function. For example, all the data, servers and applications that comprise the payroll system might be viewed as a single asset (with multiple components). Or, depending on the classification policies, components might be rated/ranked differently. Asset rankings might also take into account less tangible factors, such as "visibility." For example, a public Web server may not contain critical data, but a defacement of the site could result in public embarrassment and a decrease in customer confidence. Regardless, how a given organization views its digital assets depends on defined policies and strategies and the organization's ability to execute on those strategies.
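One hedged way to express that kind of ranking is sketched below in Python; the weights, visibility scale and asset names are invented for illustration and are not a formula from the article.

```python
# Illustrative asset-ranking sketch: an asset's score combines the highest
# data classification among its components with a "visibility" factor.
# The weighting is an arbitrary placeholder, not a published model.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    data_class: int      # 1 = public, 2 = private, 3 = proprietary/confidential

@dataclass
class Asset:
    name: str
    visibility: int      # 1 = internal-only ... 3 = highly public (e.g., Web site)
    components: list[Component] = field(default_factory=list)

    def risk_rank(self) -> int:
        worst_data = max((c.data_class for c in self.components), default=1)
        # Assumption: weight data sensitivity more heavily than visibility.
        return worst_data * 2 + self.visibility

payroll = Asset("payroll system", visibility=1, components=[
    Component("payroll database", 3),
    Component("payroll app server", 2),
])
web = Asset("public web server", visibility=3, components=[
    Component("marketing content", 1),
])

for asset in sorted([payroll, web], key=Asset.risk_rank, reverse=True):
    print(f"{asset.name}: rank {asset.risk_rank()}")
```

Even with invented weights, the exercise of scoring forces the policy questions the article raises: which components belong to which asset, and how much intangible factors such as visibility should count.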

Unfortunately, many organizations complete their classification policies but fall flat on their faces when it comes to completing the classification process. According to both our own observations and Network Computing reader polls, most organizations have not even completed their data-classification efforts, much less mapped those classifications to IT assets, essentially removing the possibility of an "effortless" move to a practical asset-based risk-ranking system.

If you're in this boat, don't jump overboard. Often, existing tools found within the organization can help. For example, while many infosec programs are in their infancies, many disaster-recovery efforts are mature. Asking the disaster-recovery folks what they discovered during their business impact analysis studies can often provide security personnel with a much-needed jump-start in identifying critical assets at a high level.
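As one hedged illustration of that jump-start (Python; the business functions, recovery-time objectives and thresholds are invented placeholders for whatever a real BIA produced), provisional asset criticality can be derived from the recovery targets the disaster-recovery team has already documented:

```python
# Hypothetical jump-start: derive provisional asset criticality from a
# disaster-recovery business impact analysis. Shorter recovery-time
# objectives (RTOs) imply more critical assets. All figures are invented.
bia_findings = {
    # business function: (RTO in hours, supporting systems)
    "order processing": (4,  ["web storefront", "order database"]),
    "payroll":          (48, ["payroll system"]),
    "marketing site":   (72, ["public web server"]),
}

def provisional_criticality(rto_hours: int) -> str:
    """Map an RTO to a rough criticality bucket (thresholds assumed)."""
    if rto_hours <= 8:
        return "high"
    if rto_hours <= 48:
        return "medium"
    return "low"

for function, (rto, systems) in sorted(bia_findings.items(), key=lambda kv: kv[1][0]):
    level = provisional_criticality(rto)
    print(f"{function} (RTO {rto}h, {level}): {', '.join(systems)}")
```

The output is only a starting point; the business side still has to validate which systems truly back which functions.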

Again, business participation is critical, because neither IT nor security can be expected to understand all of an organization's dynamics. Finally, consider using third-party resources to help in the classification process, particularly if your organization is short-staffed or there are concerns about business units objectively performing the task without aid or supervision.

If we were to apply the average infosec strategy to the world of physical security found at, say, a bank, we would wind up with a large building equipped with titanium-reinforced doors. However, those doors would remain ajar, and burglar alarms would squawk at every tenth customer. Inside would be tables piled high with cash, appropriately marked "please do not touch." Finally, the lights would be off most of the time to ensure that security guards remained only moderately effective at protecting the piles.

This scenario sounds absurd, but the harsh reality is that the digital world doesn't stray far from this model. Most security efforts are perimeter-centric, lack robust internal controls and are not monitored sufficiently. But just as bank security has evolved to include controls both at the perimeter (strong doors and walls) and internally (safes), shouldn't other organizations protect their digital assets similarly?

While most organizations do employ some internal controls, such as authentication mechanisms, file-access-control lists and the occasional network-segregation effort, the effectiveness of these controls is often lacking. Traditional internal controls are becoming less effective; modern-day attack methods usually exploit some type of application or OS flaw--flaws that let intruders bypass other protection mechanisms undetected.

For example, a basic Sun Solaris system may use proper file-level access controls in addition to strong authentication mechanisms, but if further precautions have not been taken, last week's RPC (remote procedure call) service vulnerability will let a remote attacker walk onto the machine as root, essentially turning over the keys to that machine's kingdom (and data).

Another common failure is deploying what might be perceived as a defense-in-depth implementation when, in reality, the deployment still possesses a single point of security failure. Take many Web-based e-commerce applications, for example: While a given deployment may involve firewalls and intrusion-detection systems, if the application requires only a single user name and password combination to access critical data, does the strategy truly possess any depth? How many effective controls sit between an intruder and critical data sets?
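Those last two questions can be asked almost mechanically. The sketch below (Python; the control names, attack labels and path are entirely hypothetical) counts how many controls along the path to a data set would actually stop a given attack, flagging anything that amounts to a single point of security failure:

```python
# Hypothetical defense-in-depth audit: count the controls between an
# intruder and a critical data set that are effective against a given
# attack. One or zero effective controls suggests a single point of failure.
path_to_customer_db = [
    {"control": "border firewall",      "stops": {"port-scan"}},
    {"control": "network IDS",          "stops": set()},          # detects, doesn't stop
    {"control": "app login (password)", "stops": {"anonymous-access"}},
]

def effective_controls(path, attack: str):
    """Return only the controls that would stop this attack."""
    return [c["control"] for c in path if attack in c["stops"]]

for attack in ("stolen-password", "anonymous-access"):
    hits = effective_controls(path_to_customer_db, attack)
    verdict = "single point of failure!" if len(hits) <= 1 else "layered"
    print(f"{attack}: {len(hits)} effective control(s) ({verdict}) -> {hits}")
```

Run against a stolen password, the "deep" deployment above offers zero effective controls, which is exactly the gap the paragraph describes.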

Traditional perimeter-centric and attacker-centric protection models face future problems as well. Still in the making is one of the biggest challenges: Web services. As companies collaborate and internal systems engage in higher levels of interoperability with foreign systems, one organization's lax attitude is another's security nightmare. The ever-evolving perimeter, combined with components, subroutines and data exchanges that organizations no longer control, will bring new meaning to the phrase "target-rich environment."

Other people's problems invading your computing environment won't be the exception; they'll be the norm. Technologies such as SOAP and XML-RPC promote asset-centric data sharing, rendering most perimeter controls useless. Perimeter- and attack-centric models won't help here: Organizations must move to more asset-centric controls or face increased risk and exposure.
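To see why the perimeter offers so little help here, consider a minimal sketch using Python's standard xmlrpc.client (the partner endpoint and method name are invented): the call is just an HTTP POST carrying XML, so any firewall rule that permits ordinary web traffic passes the data exchange, trustworthy or not, right along with it.

```python
# Minimal sketch: an XML-RPC call rides over ordinary HTTP, so perimeter
# controls that allow web traffic never see it as anything special.
import xmlrpc.client

# Hypothetical partner endpoint; it does not exist.
PARTNER_ENDPOINT = "http://partner.example.com/rpc"

def fetch_partner_inventory(sku: str):
    proxy = xmlrpc.client.ServerProxy(PARTNER_ENDPOINT)
    # The request is an HTTP POST carrying XML; whether the payload, or the
    # partner's handling of it, is safe is invisible to the firewall.
    return proxy.inventory.lookup(sku)

if __name__ == "__main__":
    try:
        print(fetch_partner_inventory("SB-3000"))
    except (OSError, xmlrpc.client.Error) as exc:
        print("call failed (expected for this illustrative endpoint):", exc)
```

The protection, if any, has to live with the asset being shared, not at the network edge.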

Many organizations are seeing the first wave of these threats, albeit as scaled-down versions, in their extranets. For example, the outbreak of automated worms such as Nimda left many companies in the precarious position of having third-party systems attacking their own internal machines. The problem resulted from Microsoft IIS-based systems that were owned and operated by third parties, resided on local networks and were used by local users but hadn't been kept up to date with the latest patches. The result: An outsider's negligence caused damage to internal resources--resources that did not fall under the protection of perimeter controls. Further network segmentation and more tiers of defense would have helped prevent these situations.

Looking Ahead

The next step is a big one for most security staffs, and ingrained legacy security models can present large obstacles. Many of today's infosec strategies are rooted in concepts developed decades ago, and while these concepts still apply to components of a successful program, they do not provide the framework for a holistic security model. They certainly don't incorporate the triage concept.

So should corporations stop purchasing firewalls? Should they move users into their DMZs and ditch their network IDSs?

Certainly not. However, they should move many of the tools and techniques used at the perimeter closer to critical assets. Organizations would be wise to invest some energy in first determining what they are protecting, then analyzing how best to protect it.

Greg Shipley is the CTO for Chicago-based security consultancy Neohapsis. Write to him at [email protected].

Think publicly reported monetary costs of intrusions are chilling? Consider this: According to 8,100 global technology and security professionals polled by InformationWeek, only 18 percent report incidents to CERT or government authorities, and only 14 percent keep business partners in the loop.

If you have not yet crafted an asset-centric, defense-in-depth strategy, this is your wake-up call. Organizations don't need more expensive security controls, they need more effective ones, and there are a few points that can help the process:

• A holistic approach that balances policy, process and technology is paramount.

• We must become less perimeter-centric and more asset-centric, because the reality is, we can't protect it all.

• Bulletproof security does not exist.

Forward-thinking security teams are aligning themselves with the business side of their organizations to create asset-classification systems. These systems can help security teams choose the battles to fight and prioritize deployment efforts. If you use our guide and work smart by putting fundamentals like vulnerability management and intelligent firewalling in place before branching out to niceties such as intrusion detection, 2003 could be the year that the good guys start gaining some ground.

While much of the industry anxiously anticipates the impact of legislation such as HIPAA, GLBA, the Patriot Act and the upcoming Homeland Defense initiatives, some of the legal cases that caught our eye in 2002--cases that weren't based on these much anticipated regulations--will set the stage for upcoming litigation in 2003:

• Ziff Davis Media and the New York State Attorney General: Late in 2001, subscriber information (including credit card numbers) was lifted from one of Ziff Davis' magazine-promotion sites. NYS AG Eliot Spitzer's office took notice of the data theft and found ZD's privacy policy and its interpretation of "reasonable security controls" inadequate. ZD and Spitzer came to an agreement in August 2002 that resulted in $100,000 in state fines, $500 per credit card lost (payable to the victims), and a detailed agreement outlining security-control requirements (see www.oag.state.ny.us/press/2002/aug/aug28a_02.html).

• Eli Lilly and the infamous Prozac e-mail: On July 25, 2002, NYS AG Spitzer announced a multistate agreement with Eli Lilly for an incident in 2001 wherein the pharmaceutical manufacturer inadvertently revealed approximately 670 Prozac subscribers' e-mail addresses. The agreement outlined security measures Eli Lilly must take, along with $160,000 in fines. The mistake was reportedly caused by an e-mail program that placed recipient names in the cc: field rather than the bcc: field (see www.oag.state.ny.us/press/2002/jul/jul25c_02.html).

• The SEC and e-mail preservation: On December 3, 2002, the SEC fined five firms--Deutsche Bank Securities, Goldman Sachs, Morgan Stanley, Salomon Smith Barney and U.S. Bancorp Piper Jaffray--$8.25 million ($1.65 million each, not counting legal fees and bad PR) for violating record-keeping requirements in regard to preserving e-mail communications (see www.sec.gov/news/press/2002-173.htm).

• Identity theft goes prime-cyber-time: On May 12, 2002, news broke that 15,000 consumer credit records had been lifted from Experian's systems, ostensibly by Ford Motor Credit Co., and sold for $60 each. The records were used for identity-theft purposes. It was later reported that, of those 15,000 accounts, only 400 were Ford customers, making a further case for tighter restrictions on third-party access to confidential data. This is just one of hundreds of such cases the FBI is still investigating (see CNN.com for more).

Bottom line: The legal momentum is clearly building, and regulatory actions will only turn up the heat. If organizations continue to operate with negligent controls and damages occur, lawsuits, tangible dollar losses, negative publicity and reduced customer confidence will certainly result. The stakes will rise this year.

