The (Virtual) Show Must Go On

For movie-industry supplier Deluxe, a managed, virtual-computing utility brings greater scalability and reliability -- without the need for more staff or a bigger data center.

February 4, 2005


Sometimes the best way to get a job done is to give it to someone else. At Deluxe Laboratories, North America, we found that outsourcing certain IT systems to a virtualized, managed facility is a powerful, effective, and money-saving solution.

Deluxe is a company you see listed in the credits of many feature films. Our company, a subsidiary of Rank Group PLC in the United Kingdom, offers a broad range of entertainment-industry services and technologies to our international clients, which include MGM, Miramax, New Line Cinema, Paramount, and Sony. Deluxe's film business had first-half 2004 revenue of roughly $336 million, and its media services group had revenue of roughly $314.8 million over the same period.

While Deluxe's main business is worldwide movie distribution and fulfillment services, our services include physical- and digital-asset management, content repurposing and conversion, digital production, film-laboratory services, and release printing. We have offices in seven European countries, Canada, and the United States.

Like many fast-growing companies, Deluxe recently made several strategic acquisitions. These allow us to serve our customers at multiple points in the supply chain. In 2004, we acquired Deluxe Digital Media Management from Deluxe Media Services. This operation, located in Valencia, Calif., serves the promotional end of the movie business.

Though Deluxe tends to focus on lines of business, we view IT as spanning all functions. So, while the newly acquired company's operations are managed by our Deluxe Digital Media Management group, its IT systems are the responsibility of the corporate IT department, which I lead.

The core applications we inherited with this division are built around inventory control. Deluxe owns and operates a 250,000-square-foot warehouse. When a movie-studio customer sends promotional merchandise, such as posters or standing displays, to our warehouse, we receive and store the goods. Then, when the customer is ready to ship those items to movie theaters, we pick, pack, and ship—and get the tracking numbers. The customer must be able to enter orders and track everything via a Web-based interface to ensure that the delivery was made on time and with the correct mix of merchandise.

The application may sound simple, but some of the product combinations are extremely complex and customized for each theater. In addition, we must complete this task quickly. When customers say they want something, they mean yesterday!
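For readers who like to see the shape of such a system, here is a minimal sketch in Java of how an order for a single theater might be represented. The class, field, and sample values are invented for illustration and aren't taken from our production system.

```java
import java.util.ArrayList;
import java.util.List;

// A minimal, hypothetical sketch of a theater fulfillment order; not Deluxe's actual code.
public class TheaterOrder {

    // One promotional item (poster, standee, etc.) and the quantity bound for this theater.
    static class LineItem {
        final String sku;
        final int quantity;
        LineItem(String sku, int quantity) { this.sku = sku; this.quantity = quantity; }
    }

    enum Status { RECEIVED, PICKED, PACKED, SHIPPED }

    private final String theaterId;
    private final List<LineItem> items = new ArrayList<>();
    private Status status = Status.RECEIVED;
    private String trackingNumber;  // assigned by the carrier at ship time

    TheaterOrder(String theaterId) { this.theaterId = theaterId; }

    void addItem(String sku, int quantity) { items.add(new LineItem(sku, quantity)); }

    // Pick, pack, and ship are collapsed into a single call here for brevity.
    void ship(String trackingNumber) {
        this.trackingNumber = trackingNumber;
        this.status = Status.SHIPPED;
    }

    // Roughly what a customer would see through the Web-based tracking interface.
    String summary() {
        return theaterId + ": " + items.size() + " line items, status " + status
                + (trackingNumber == null ? "" : ", tracking " + trackingNumber);
    }

    public static void main(String[] args) {
        TheaterOrder order = new TheaterOrder("THEATER-0421");  // placeholder identifiers
        order.addItem("POSTER-27x40", 12);
        order.addItem("STANDEE-LOBBY", 2);
        order.ship("1Z999AA10123456784");
        System.out.println(order.summary());
    }
}
```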

In acquiring this division, we inherited an IT environment that required rethinking because it didn't meet our technical and architectural standards. Deluxe Digital Media Management operated two hosting sites, one managed by a third party and the other not, and the way both were run left something to be desired. For example, one day the operators decided to apply a server upgrade at 2 p.m. but forgot that customers were on the system at that time.

For this and other reasons, we concluded that the fulfillment-business system technology—low-end servers that weren't scalable without replacement—would need a major refresh. When we estimated the cost, however, we found that the capital requirements were huge, and we'd also need to significantly expand our head count to support it.

While controlling costs is important to nearly every industry, it's especially important in ours, where vendors like Deluxe get squeezed. But offering a low-cost service isn't enough. We must also provide high quality.

Another concern was scalability. One advantage of being a large company is that our customers are comfortable signing contracts with us that span many segments of the supply chain. So, at the end of our evaluation in the first quarter of 2004, we realized we needed a new IT environment for the fulfillment business. It would need to ramp up—or down, if necessary—quickly, fairly easily, and without extra costs in the form of additional staff or a larger data center. The solution, we quickly decided, lay outside the company.

Our next step was to solicit bids from vendors. One of these, Savvis Communications, was already supplying a private data-IP network to Deluxe Laboratories. As it turned out, Savvis came back with a winning bid for a virtual-computing environment.

IBM and others offered a "computing on demand" model that essentially lets customers rent applications on an as-used basis. For customers, this solution saves both money and time, in large part by freeing them from operating extra capacity, in both processors and storage, as a stopgap against periods of high traffic. But research shows that, on average, mainframe servers have a utilization rate of only 53%, and typical Unix and Linux server utilization is only 50%. Given those numbers, the on-demand model still didn't look economical to us. With Savvis, costs are based on usage only.
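To make the utilization point concrete, here is a back-of-the-envelope sketch comparing dedicated capacity against a purely usage-based charge. The dollar figures are made up for illustration; they aren't our actual costs or Savvis's pricing.

```java
// Back-of-the-envelope comparison of dedicated vs. usage-based capacity.
// All dollar figures here are illustrative, not Deluxe's actual costs or any vendor's prices.
public class CapacityCostSketch {
    public static void main(String[] args) {
        double monthlyCostPerOwnedServer = 1_000.0; // hypothetical fully loaded cost per server
        int ownedServers = 18;                      // capacity sized for peak load
        double utilization = 0.50;                  // the typical Unix/Linux figure cited above

        double ownedCost = ownedServers * monthlyCostPerOwnedServer;
        // Under a usage-based model, you pay roughly in proportion to what you actually consume.
        double usageBasedCost = ownedCost * utilization;

        System.out.printf("Dedicated capacity: $%,.0f per month%n", ownedCost);
        System.out.printf("Usage-based (at %.0f%% utilization): $%,.0f per month%n",
                utilization * 100, usageBasedCost);
        System.out.printf("Idle capacity paid for but unused: $%,.0f per month%n",
                ownedCost - usageBasedCost);
    }
}
```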

Also, the virtualized-services architecture is built on automated software-management and software-provisioning systems that give visibility across the network, hosting, compute, and storage platforms. This visibility, in turn, would reduce our dependence on redundant hardware—whether it's on our premises or at a host facility.

We signed a contract in June 2004, and we haven't looked back since. It took fewer than 45 days to complete the conversion, and we've already achieved a major gain by reducing 18 servers to only six. We've also avoided the need to hire database administrators and high-level support people. In the process, we believe we'll save approximately $500,000 in the first two years of the arrangement.

We've also stabilized the application. For example, we had earlier problems with Macromedia's ColdFusion, a tool for building and deploying Web applications. We now realize that some of the problems were environmental-resource issues, not problems with the code.

That's not to say the conversion went without a glitch. In fact, some of our customers simply didn't want to update their DNS (Domain Name System) entries to point at the new environment, which we needed them to do. It may sound like a trivial change, but their reluctance caused us a great deal of grief during the conversion.
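For anyone planning a similar cutover, a quick check like the following can confirm whether a customer's DNS entry has actually been repointed. The hostname and address below are placeholders, not our real entries.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Quick DNS sanity check: does a customer-facing hostname resolve to the new host's address?
// The hostname and expected IP are placeholders for illustration only.
public class DnsCutoverCheck {
    public static void main(String[] args) {
        String hostname = "orders.example.com";   // customer-facing name (placeholder)
        String expectedIp = "192.0.2.10";         // new hosting environment (placeholder)

        try {
            String resolvedIp = InetAddress.getByName(hostname).getHostAddress();
            if (resolvedIp.equals(expectedIp)) {
                System.out.println(hostname + " already points to the new environment.");
            } else {
                System.out.println(hostname + " still resolves to " + resolvedIp
                        + "; the DNS entry has not been updated yet.");
            }
        } catch (UnknownHostException e) {
            System.out.println("Could not resolve " + hostname + ": " + e.getMessage());
        }
    }
}
```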

On the other hand, some of the conversion work has gone even faster and more easily than expected. For example, we recently needed to realign one of the applications between servers, and to tune the database. Though we expected the work would keep us busy over an entire weekend, we actually finished at about 5:30 p.m. on Saturday—a day ahead of schedule.

One common concern among potential users of shared-computing facilities is data security. After all, your data is running on the same systems used by your vendor's other customers. And in our case, much of the data is highly confidential. For example, we run a digital rights management (DRM) server for the chairman of one of the major movie studios. As you can imagine, this isn't data the client is eager to share with competitors.

We've found the virtual utility quite secure. I've met with the vendor's chief security officer, and we've thoroughly tested the security of the system to see how it behaves. We've seen no red flags. Our service provider also spreads the cost of its hardware across many clients, so it can afford infrastructure that would usually be too expensive and complicated for a typical IT shop to build on its own. In addition, we run our own monitoring software on the system. When this software catches a disconnect, a downed server, or any other problem, it immediately sends us an E-mail alert. When we receive one of these flags, we then wait to see how much time passes before someone from Savvis notifies us. In every case, the phone call has come within just a few minutes.
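Our monitoring tool is a packaged product, but the idea behind it is simple. The sketch below shows the kind of periodic check that triggers an alert when a server stops answering; the URL is a placeholder, and a console message stands in for the E-mail alert our real setup sends.

```java
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal health-check loop in the spirit of our monitoring setup.
// The URL, interval, and alert mechanism are placeholders; a real monitor would send E-mail or page someone.
public class HealthCheckSketch {
    public static void main(String[] args) throws InterruptedException {
        String endpoint = "http://orders.example.com/healthcheck";  // hypothetical URL

        while (true) {
            if (!isUp(endpoint)) {
                // A production monitor would send an E-mail alert here.
                System.out.println("ALERT: " + endpoint + " is not responding");
            }
            Thread.sleep(60_000);  // check once a minute
        }
    }

    // Returns true only if the endpoint answers with HTTP 200 within the timeouts.
    static boolean isUp(String endpoint) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
            conn.setConnectTimeout(5_000);
            conn.setReadTimeout(5_000);
            return conn.getResponseCode() == 200;
        } catch (Exception e) {
            return false;
        }
    }
}
```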

Looking ahead, we have ambitious plans for the application. We plan to move to an N-tier application architecture on the Java 2 Platform, Enterprise Edition (J2EE). An N-tier architecture will let us create flexible, reusable code by letting our developers modify or add a specific layer rather than rewrite the entire application. And J2EE is a powerful platform for building and deploying Web-based enterprise applications.
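As a simplified illustration of that layering, the sketch below separates data access, business logic, and presentation behind interfaces, so a change to one layer doesn't ripple through the others. All class and method names are invented for this example; they aren't our planned design.

```java
// Simplified N-tier layering sketch; all names are illustrative, not an actual design.

// Data-access layer: how orders are looked up, hidden behind an interface.
interface OrderRepository {
    String findStatus(String orderId);
}

// One implementation; swapping in a different database touches only this class.
class InMemoryOrderRepository implements OrderRepository {
    public String findStatus(String orderId) {
        return "SHIPPED";  // stand-in for a real database lookup
    }
}

// Business layer: rules live here, independent of storage and presentation.
class OrderService {
    private final OrderRepository repository;
    OrderService(OrderRepository repository) { this.repository = repository; }
    String describe(String orderId) {
        return "Order " + orderId + " is " + repository.findStatus(orderId);
    }
}

// Presentation layer: in a J2EE deployment this would be a servlet or JSP rather than main().
public class NTierSketch {
    public static void main(String[] args) {
        OrderService service = new OrderService(new InMemoryOrderRepository());
        System.out.println(service.describe("THEATER-0421"));
    }
}
```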

We're strongly considering these new technologies because we believe the fulfillment business will grow dramatically, and we need applications that are relatively easy to scale. We're also looking to reuse components across our company as our workflow grows increasingly complex. In keeping with our companywide move toward outsourcing whenever possible, we plan to contract the development work offshore while maintaining the analysis and design work internally.

We're highly satisfied with the virtualized computing model. In fact, we're now evaluating whether other applications could be moved to a similar setup. If an application fits the hosting model, we plan to move it.

Mark Winter is executive VP of IT at Deluxe Laboratories, North America.

Please send comments on this article to [email protected].

See Related Articles:

Is Pay-As-You-Go Computing Viable? May 2004

Adaptive Pricing Comes Into Focus, June 2003

The Future Utility Of IT, November 2002

Vendors apply the term "utility computing" to products as creatively as the packaged-goods industry uses the term "all-natural." Storage-solution vendors use it to mean making almost unlimited storage automatically available to applications as needed. High-availability solution vendors use it to mean more resources and less human intervention. To others, it can mean grid computing or the use of virtual-server software applications.

The underlying themes in the various interpretations of utility computing encompass scalability, rapid reaction to changing business needs, efficient use of resources, higher availability, and greater automation—all very much needed by IT shops, of course. However, we predict that reaching these lofty goals will take several years.

The sudden urge for utility-like services stems from the proliferation of servers in the data center. For the past 20 years, line-of-business developers have been installing simple, multiuser applications on departmental servers. Eventually, application owners grew tired of being accosted by their office mates whenever the server crashed, and looked to data-center managers to take the servers into mainframe and midrange data centers. Once the principle was established that data centers could house servers, new server-based applications started to be placed directly in data centers at an accelerating rate, under a "one application, one server" operating model. The rise of the Internet brought Web servers to the data centers by the truckload. By the late 1990s, data centers had hundreds or even thousands of Wintel and Unix servers in their care.

As a result, most data centers today accommodate, rather than manage, the server environment, with the possible exception of servers for companywide applications such as E-mail. Aside from help-desk services, data-center support rarely goes beyond the OS or DBMS level. This becomes apparent when companies plan a data-center move or server-consolidation exercise. Records of which application or database is running on which server, along with interdependencies, technical contacts, and planned changes, are typically out of date and full of gaps. Consequently, such projects start with a lengthy discovery process that can take many months.

Another consequence is the proliferation of different OS and DBMS versions—often between 10 and 20 of each. It's a major and lengthy undertaking to migrate even a few dozen applications to a single OS/DBMS version; the lines of business have to find time to remediate and test their applications.

To make use of utility-computing products and solutions, data centers will first need to reach a point where they're truly managing the server environment and applications are no longer tied to specific servers. To do this, they must create up-to-date records of the environment, drastically reduce the number of OS/DBMS versions in use, and achieve some basic level of server consolidation. Only then can they start to deploy utility-computing solutions. For this reason, we believe that fully realizing utility computing will take another five to 10 years.—Malcolm Hamer, practice director, IT economics, Greenwich Technology Partners
