Time to Reconsider the Data Center


January 5, 2012


As enterprises push out data centers into cloud-based computing and virtual applications, traditional data center planning and practices haven't necessarily kept pace. Is it time to reshape definitions of classic brick-and-mortar data centers into a new computing concept with different performance and total cost of ownership expectations? If nothing else, the business cases now driving data center services are beginning to demand it. "As a global organization, we know that we must not only provide 24/7 IT, but also enterprise-strength IT support on a follow-the-sun basis," says John Heller, CIO of Caterpillar.

Demands for 24/7 computing availability and "A team" IT support come at a time when 65% of CIOs are using or planning to use cloud in their data center strategies. However, 55% remain uncommitted to IT asset management beyond physical data centers, according to an IBM survey of IT executives recently shared in a briefing with industry analysts. The primary reason for trepidation is concern about security.

Public cloud providers haven't done much to change this perception. This past May, many of Google's services, such as Gmail, Search, Maps, Analytics and YouTube, suffered an outage, leading IT executives to wonder what would have happened if another cloud service, such as Google Apps, had experienced a similar failure. This kind of downtime is not acceptable for any enterprise application--even a non-mission-critical one. In August of 2011, lightning knocked out power sources in Europe and caused downtime for many Amazon customers using cloud services such as the Amazon Elastic Compute Cloud (EC2). The situation was compounded a day later when a problem in a clean-up process within Amazon's Elastic Block Store (EBS) service deleted some customer data.

Situations like these do not inspire confidence in CIOs when it comes to entrusting enterprise services to the cloud. They are among the reasons why enterprises deploying cloud are beginning the journey with private clouds, with a game plan that allows for expansion into a hybrid (private-public) cloud as the technologies and practices mature.

Regardless of the evolutionary path cloud must take, it has already impacted traditional data center thinking to the point where most CIOs and data center managers understand that the data center must be reshaped as IT moves forward.

Here are several key data center challenges facing CIOs as we move into 2012:

Conversion to a service culture
Cloud computing and on-demand resource provisioning are propelling IT into a service center with measurable SLAs (service level agreements) that evaluate IT performance based on responsiveness to user help calls as well as on mean time to repair (MTTR) during problem resolution. Vendors are working overtime to ensure that the tools for measuring performance and taking corrective action in a service-oriented environment are there. However, the more troubling aspect for IT decision makers is how to move their staffs forward.
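To make the MTTR metric concrete: it is simply the average elapsed time from detection to resolution across incident records. The sketch below assumes incidents are available as detection/resolution timestamp pairs; the field layout is illustrative, not any particular ticketing system's schema.

```python
from datetime import datetime, timedelta

def mean_time_to_repair(incidents):
    """Average repair duration across resolved incidents.

    Each incident is a (detected_at, resolved_at) pair of datetimes.
    """
    durations = [resolved - detected for detected, resolved in incidents]
    return sum(durations, timedelta()) / len(durations)

# Two hypothetical incidents: one took 90 minutes to repair, one took 30.
incidents = [
    (datetime(2012, 1, 3, 9, 0), datetime(2012, 1, 3, 10, 30)),
    (datetime(2012, 1, 4, 14, 0), datetime(2012, 1, 4, 14, 30)),
]
print(mean_time_to_repair(incidents))  # 1:00:00
```

Tracked over time, a falling MTTR is one of the few service-level numbers that end users actually feel.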

While IT has largely shed its 1980s-vintage glass-house reputation, it is still a control-oriented discipline that operates in a world of things, not people. Developing the people skills to treat end users like outside customers (that is, not taking them for granted), and learning to work with other IT disciplines that have traditionally been siloed, are not accomplished overnight.

In the new service-oriented data center, it will be incumbent on IT to deliver premium service to expectant end users. This service comes not only in the form of better uptime and faster processing and response/repair times. It also demands excellent communications skills and the ability to follow up with end users on outstanding issues before they finally have to pick up the phone and call you because they haven't heard from you.

End-to-end application management
The emphasis on service management means that IT must have end-to-end visibility of application and workload performance if performance problems are going to be detected and resolved. Applications and workloads now routinely cross multiple platforms and operating environments in both traditional and cloud-based settings, which requires system management software that can track an application at every juncture. This presents a challenge if different IT professionals (for example, DBAs, network administrators and system programmers) use different tool sets for troubleshooting. These tools tend to present application data differently, so there is no unified view. The consequence can be staff finger-pointing and deadlock over why a given application isn't performing well. Meanwhile, the business waits for the problem to be solved.

Managing virtual sprawl
After 10 steady years of virtualization, virtual server sprawl is starting to impact data centers. This comes at a time when data center managers are still celebrating the reductions in floor space, licensing fees and energy use that were attained as physical servers gave way to their virtual counterparts. Indeed, one of the issues that has been creating virtual server sprawl has been the perception that data center resource utilization has already been attained with the virtualization of physical servers. This can give sites a false impression that any resources they choose to deploy virtually are free.

This feeling has spurred aggressive virtual server and systems deployments. Consequently, companies are now seeing their virtual system and server costs rise. These growing costs are beginning to erode the cost and efficiency gains originally attained with virtualization and to add to TCO (total cost of ownership)--but you can control those costs by managing your virtual servers.
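Getting sprawl under control starts with an inventory audit: which virtual machines are doing real work, and which are idling? A minimal sketch, assuming per-VM utilization metrics are already being collected (the field names and thresholds here are illustrative, not tied to any particular hypervisor):

```python
def flag_sprawl(vms, cpu_threshold=5.0, idle_days_threshold=30):
    """Return names of VMs that look like sprawl candidates:
    low average CPU, or no recorded activity for a long stretch."""
    return [
        vm["name"]
        for vm in vms
        if vm["avg_cpu_pct"] < cpu_threshold
        or vm["idle_days"] > idle_days_threshold
    ]

# Hypothetical inventory snapshot.
inventory = [
    {"name": "web-01", "avg_cpu_pct": 42.0, "idle_days": 0},
    {"name": "test-legacy", "avg_cpu_pct": 1.2, "idle_days": 120},
    {"name": "build-07", "avg_cpu_pct": 8.5, "idle_days": 45},
]
print(flag_sprawl(inventory))  # ['test-legacy', 'build-07']
```

Flagged VMs still need a human decision--reclaim, archive or keep--but without the audit, "free" virtual resources quietly accumulate.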

The data explosion
Data (especially unstructured data) is growing exponentially in enterprises, and the question in the data center now is how best to manage it. Since data growth and access are primary focal points, the key for many enterprises is ensuring that the data used most often is the most easily accessible, and that data not being used is either archived or deleted. One political issue many IT organizations have yet to broach with their end users is setting archiving or deletion criteria for data that is seldom or never used. Given the avalanche of new regulatory requirements for data retention that impact industries like finance, insurance, utilities and healthcare, saying "no" to keeping data that is seldom or never accessed is becoming harder to do.

Vendor management and legal skills
As more outside cloud and software services are added, the IT focus must shift to vendor management and contract negotiation. Unfortunately, most IT professionals aren't strong in these skills--and most schools (with the exception of law schools) don't teach them. Vendors can make or break IT success in the data center, which makes vendor management a must-have skill. CIOs and data center managers have to either find or develop this skill in IT, because corporate lawyers with no idea of how IT works aren't a turnkey solution, either.

Workload management (analytics servers)
Business analytics will be a driving IT force in 2012, but data centers are far from incorporating analytics into their workloads. The classical approach has been to prioritize traffic and resources for online transaction processing during business hours and to relegate analytics reports to batch or offline data warehouse processing that runs at lower priority, frequently overnight. This has to change if enterprises are going to take full advantage of the real-time analytics they are demanding.
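The shift is from time-based separation (analytics runs at night) to priority-based sharing of the same resources, where transaction work still outranks analytics but analytics no longer waits until morning. A minimal sketch of that idea using a priority queue; the job names and two-level priority scheme are purely illustrative:

```python
import heapq

OLTP, ANALYTICS = 0, 1  # lower number = higher priority

def drain(queue):
    """Pop jobs in priority order; OLTP work always runs before analytics."""
    order = []
    while queue:
        _priority, job = heapq.heappop(queue)
        order.append(job)
    return order

work = []
heapq.heappush(work, (ANALYTICS, "refresh sales dashboard"))
heapq.heappush(work, (OLTP, "process customer order"))
heapq.heappush(work, (ANALYTICS, "score churn model"))

print(drain(work))
# ['process customer order', 'refresh sales dashboard', 'score churn model']
```

Real workload managers add preemption, resource quotas and aging so low-priority jobs can't starve, but the ordering principle is the same.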

Transformative business cases will drive data center revisions
When the volcano erupted in Iceland in spring 2011, one Fortune 100 global manufacturer never lost a step in the fulfillment of its orders. Understanding that its supply chain was disrupted when planes in Europe were grounded, the manufacturer quickly recalibrated its supply chain (by using business analytics and a cloud-based supplier network) to reroute sourcing. The combination of cloud-based IT, real-time business analytics and integrated supplier collaboration produced the rapid response that enabled this to happen. The business and the customers benefited.

Every enterprise has its own business cases that ultimately drive IT investment--and they are going to continue to push data centers to reach beyond their brick-and-mortar fortresses and into the global network where businesses meet their customers and business partners. It is not too late to start reshaping the data center so it can meet these new challenges head-on--and it is not too early, either.

Check out our new research report IT Pro Ranking: Data Center Networking (free, registration required).
