2006: Storage Users to Watch
Ten organizations poised to make big storage news in the next 12 months
January 3, 2006
The customer is always right, especially if they spend enough money.
There was no shortage of rightness in 2005, or of cutting-edge projects undertaken by users, from high-performance computing to server and storage consolidation. Public sector and private, outsourcing, insourcing, open sourcing: you name it, the storage tires got a thorough kicking over the last 12 months.
We at Byte and Switch have been lucky to have a front-row seat for many of these plans and implementations in 2005. What follows is a cheat sheet to the customers poised to make even more headlines in 2006.
No. 10: USAF Takes Aim
Some years ago the U.S. Air Force embarked on a major storage overhaul, signing a five-year, $70 million deal with Lockheed Martin Corp. for one of the largest SANs ever. Since then, the Air Force has been steadily adding to its technology, signing deals with the likes of Microsoft Corp. (Nasdaq: MSFT) and Sun Microsystems Inc. (Nasdaq: SUNW). (See USAF Selects Microsoft and USAF Collaborates With Sun.)

At the Storage Networking World show in October, Lt. Col. Karlton Johnson, a USAF technology guru and member of the Army War College, explained that the Air Force has already had some real successes with its storage infrastructure. Air Force personnel, he said, were able to perform 58,000 error-free backups in Southeast Asia in the aftermath of the tsunami, as well as centralize storage management in the devastated region.
But the Air Force is currently working on a number of projects that will put an even greater strain on its storage resources. These range from technologies to tackle friendly-fire incidents to new forms of unmanned aerial vehicles (UAVs).
Air Force chiefs are also wrestling with their own interoperability issues, as technology standards for storage networking differ from command center to command center. With the Department of Defense still one of Washington's biggest technology spenders, 2006 promises to be an interesting year for USAF IT projects.
News Analysis: USAF Issues Storage Challenge
News Analysis: Lockheed Soars on Air Force SAN Deal
News Analysis: Washington Cranks Up Contracts
No. 9: JP Morgan, Insourcing Trailblazer
Could this be the shape of things to come? The financial giant sent shockwaves through the banking industry a little over a year ago when it ended its $5 billion, seven-year outsourcing deal with IBM Corp. (NYSE: IBM). (See JP Morgan Ends IBM Outsourcing Deal and IBM Scores $5B Deal With JP Morgan.)

The move to bring servers and storage back in-house ran contrary to prevailing trends in the IT industry, although early signs suggest the firm has things well under control.
JP Morgan revealed in 2005 that it had already replaced a supercomputer and a mixture of high-end servers with a 4,000-processor grid built from low-cost servers. The InfiniBand-based grid is now the firm's technology backbone, linking 10 data centers across three continents, and execs expect some serious cost savings.
A number of storage, server, and software vendors are helping JP Morgan plow its lone furrow, including Egenera Inc. and Sun Microsystems Inc., with Sun working on data archiving.
Taking servers and storage back in-house is not entirely without precedent. Back in 2002, Bank One ended outsourcing contracts with AT&T Inc. (NYSE: T) and IBM Global Services, reportedly shaving $75 million off its technology budget in the process. Only time will tell whether JP Morgan can emulate that success.
News Analysis: Will More Banks Bet on Insourcing?
News Analysis: JP Morgan Goes Grid
News Analysis: Egenera Looks Beyond Blades
News Analysis: IBM/Morgan Deal Shares the Wealth
No. 8: Oak Ridge Ploys

Scientists at Oak Ridge National Lab, a key Manhattan Project site, are planning the world's largest supercomputer, supported by a vast amount of storage. In 2005, Oak Ridge officials told Byte and Switch they planned to deploy a petaflop-class machine by the end of the decade to cope with spiraling demand for computing power.
In case you aren’t sure, a petaflop is equal to a thousand trillion operations per second. This is three orders of magnitude more powerful than a teraflop, which is a trillion operations per second.
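Spelled out, the conversion works like this:

$$
1\ \text{petaflop} = 10^{15}\ \text{operations/sec} = 10^{3} \times 10^{12}\ \text{operations/sec} = 1{,}000\ \text{teraflops}
$$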
But with all this additional computing power comes a massive demand for storage. Oak Ridge execs are planning to use a whopping 40,000 disks for primary storage, totaling around a petabyte.
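Taken at face value, those figures imply surprisingly modest individual drives, presumably a design that favors spindle count and aggregate I/O bandwidth over per-drive capacity:

$$
\frac{1\ \text{petabyte}}{40{,}000\ \text{disks}} = \frac{10^{15}\ \text{bytes}}{4 \times 10^{4}} \approx 25\ \text{GB per disk}
$$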
The lab is also looking to overhaul its power infrastructure to support this "mother of all computers." Thomas Zacharia, Oak Ridge's associate director, estimates that the new supercomputer will require around 40 megawatts, five times the 8 megawatts currently consumed by the lab's systems.
Although supercomputing may be synonymous with cone-head academics and shadowy government agencies, a project on this scale could show enterprises how to handle their own data explosions.
News Analysis: Oak Ridge Plans Petaflop Supercomputer
No. 7: UPMC: From SANs To Spinoffs
Keep your eye on the University of Pittsburgh Medical Center (UPMC), which has moved more than 100 Tbytes of data to new SANs since signing an eight-year deal with IBM in 2005. But this is just the tip of the iceberg for the 19-hospital health center: plans are afoot to implement IBM's SAN Volume Controller (SVC) to virtualize storage, along with IBM Tivoli storage management and monitoring software. The $402 million project is also likely to include additional storage tiers, an email archive system, a director switch upgrade, and possibly NAS filers.
The broader ramifications of this project could be significant. As part of the deal, UPMC will beta test IBM products and the two organizations will initially invest $50 million to develop technology for the health care industry. This figure, however, is expected to grow to $200 million each by the end of the deal.
Could UPMC turn its mammoth storage consolidation into a healthy profit by spinning off new businesses? Wait and see.
News Analysis: New SANs Heal UPMC Constraints
No. 6: Sandia Eats Linux For Lunch

Not to be outdone by their counterparts at Oak Ridge, scientists at Sandia National Laboratories are pushing ahead with a number of notable projects. The lab's Combustion Research Facility (CRF), for example, has joined a growing list of users eschewing expensive monolithic systems, replacing a supercomputer with a powerful Linux-based cluster of blade servers.
Whereas cost savings have prompted many users to deploy clusters, Sandia cites ease of use and scalability as the major benefits of the InfiniBand-based cluster. In 2005, Joe Oefelein, senior member of technical staff at Sandia, told Byte and Switch that Linux is key to this. Because the operating system is so widely known, he boasted, "Some of the guys here can rebuild the kernels while they are eating their lunch."
The blade cluster, which is supported by a 13-Tbyte disk array, is not the only cutting-edge scheme underway at Sandia. The lab's ROSE Cluster, for example, which manages and analyzes output from its Red Storm and Thunderbird supercomputers, recently set a new file system performance record. (See Sandia Sets Record.)
News Analysis: Sandia Blasts Off Blade Cluster
News Analysis: Luebeck Looks to Clusters
No. 5: LSU Reports For Duty
While people were evacuating Louisiana after Hurricane Katrina hit in August, Louisiana State University's high-performance computing (HPC) staff was reporting for duty to support emergency services.

The Baton Rouge campus became the center of emergency operations for the Department of Homeland Security's Federal Emergency Management Agency (FEMA), the Red Cross, and other relief groups in the immediate aftermath of the storm.
Along with power, LSU's director of HPC, Brian Ropers-Huilman, and his staff had to provide FEMA with storage for archiving all the aerial photography taken of the affected coast. Although LSU did not have the capacity FEMA wanted, Ropers-Huilman quickly upgraded the university's Panasas Inc. ActiveScale Storage Cluster to cope with the increased demand.
The university had been beefing up its computing center since 2002 when it purchased a large-scale Linux cluster supercomputer known as Super Mike, and FEMA was ultimately able to reap the benefits of LSU’s file system technology. (See Panasas Clusters at LSU.)
Ultimately, LSU’s experiences underline the importance of preparing for every eventuality.
News Analysis: LSU Raises Storage Bar
No. 4: Consultancy Cuts Blades

New Energy Associates is looking for some big savings this year, in part by throwing away its blade servers and growing its storage network. The urge to cut heat and power consumption in its data centers prompted the decision, deflating the notion of blade servers as a silver bullet for the data center.
In a nutshell, New Energy is looking to do some serious number crunching by combining virtualization and standard servers, without breaking the bank. The consulting firm is not the first user to turn its back on blades, but few enterprises have actually come forward and explained their reasons for doing so.
New Energy's story prompted a flurry of activity on the Byte and Switch message boards in 2005, with other users eager to find out more about the firm's consolidation efforts. It will be interesting to see how New Energy's ambitious plan works out during the coming year.
News Analysis: NewEnergy Chops Its Blades
News Analysis: Are Blades Cutting It?
News Analysis: Study Highlights Blade Disappointment
No. 3: Yahoo -- Beast, Fish & Fowl
Web portal or network wannabe? You hear the pitch from these guys at every tradeshow and in every business pub you see… "We want to be the multimedia destination, your content king," prompting sub-audible groans. But then you hear about the consumer and corporate programming they host, or the recent deal under which Yahoo will stream network television content to mobile users.

And like every other portal/online auction/content aggregator/Webmail hoster out there, Yahoo needs a voice component. Thank heavens the company recently revealed its intention to get into the voice-over-IP market -- taking on Skype Ltd., Google (Nasdaq: GOOG), eBay Inc. (Nasdaq: EBAY), and the Tupperware® lady.
And all that has exactly what to do with storage? When you're registering more than 2.4 million hits per day and 20 million customers a week, you want files and data that are immediately accessible, said Yahoo's global storage architect Ken Black in October. It's clear the company intends to match its competitors feature for feature and byte for byte in 2006. And that spells lots of storage capacity, with plenty of headroom to accommodate demand.
It's not as sexy as a Victoria's Secret webcast, but the underlying storage that serves it up is just as strategic.
News Analysis: Yahoo Opens Bandwidth Bottlenecks
News Analysis: Yahoo Jumps Into Voice
No. 2: What's Good For GM Is Good For Storage
It's an IT lottery of sorts. General Motors Corp. is poised to begin awarding an estimated $15 billion in IT services and management contracts early this year. If even 5 percent of that is storage related, the winning vendor(s) are in for a $750 million windfall over five years.

Four frontrunners have emerged, according to various press accounts. Capgemini, Electronic Data Systems Corp. (EDS) (NYSE: EDS), Hewlett-Packard Co. (NYSE: HPQ), and IBM comprise the shortlist, and some are giving IBM the inside track. But with HP also in the running, it's hard to imagine that two of the leading storage vendors won't find a way to derive some NAS, SAN, or virtualization revenue from the deal. The fact that both HP and IBM have strong footholds in the network management realm, both as software vendors and managed service providers, also has to help their chances.
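For what it's worth, the back-of-the-envelope math behind that windfall figure is simply:

$$
0.05 \times \$15\ \text{billion} = \$750\ \text{million over the five-year life of the contracts}
$$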
For now, GM CIO Ralph Szygenda and his inner circle are the only ones who know how this will all pan out, but the unparalleled size of this award makes GM an enterprise to watch in 2006.
News Analysis: Sun Brings Java Enterprise to GM
News Analysis: GM Buys Major IBM Supercomputer
News Analysis: IBM Speeds GM Crash Tests
No. 1: Google's Data Demons
The secretive search giant admitted last month to suffering the same storage pressures as everyone else, suggesting that it will be adding capacity throughout 2006.
There have already been hints that Google, like other firms, is wrestling with its data demons. According to documents filed with the Securities and Exchange Commission (SEC) last month, the firm's capital expenditures more than doubled, growing from $259.9 million in the nine months ended September 30, 2004, to $592.4 million for the same period in 2005. Google said it expected to spend more than $800 million on property and equipment, including IT infrastructure, to help manage and expand its operations during 2005.

Storage undoubtedly plays a major part in this, as Google constantly spews new data-hungry products, from Google Earth to the recently launched Google Base hosting service. (See Google Launches Google Base.) Then there is the firm's controversial book digitization project, and the Gmail service, which now offers users up to 2.5 Gbytes of Webmail storage.
Whether Google chooses to explain how it is coping with all this raw data is doubtful, although you can be sure there will be plenty going on in the firm's data centers to support its push for world domination.
News Analysis: Google Groans Under Data Strain
— James Rogers, Senior Editor, and Terry Sweeney, Editor in Chief, Byte and Switch