Is ILM Finally Ready For Prime Time?

It's an undeniable fact that most organizations are drowning in unstructured data. Yet despite the general acceptance of the concept that files should be stored in ways commensurate with their changing value over time, few organizations really manage their files well. Vendors have made several attempts at making a buck with tools that automagically manage unstructured data, calling it HSM then ILM, with little acceptance and even less profit.

Howard Marks

November 20, 2009

3 Min Read

After performing an autopsy on the last set of ILM vendors, over drinks with some of their former execs and in conversations with IT professionals, I think I understand why FAN and ILM joined HSM in the TLA (Three Letter Acronym) graveyard.

First, the declining cost of storage, driven in part by corporate acceptance of SATA drives, has made the Doritos method of file management (crunch all you want, we'll make more) affordable. For 10-15 years we've periodically done a forklift upgrade from NetWare and Windows file servers to several generations of NetApp and Celerra NAS, just copying the data to a bigger system with bigger drives. If the system filled up in between technology refreshes, an additional tray of 250GB or 1TB drives was cheaper than an F5/Acopia virtualization switch or Scentric software, and it didn't require a major project to implement.

Secondly, classification and data migration tools have been primitive, complex, and expensive. $10,000 a terabyte is a lot to pay for a product that migrates files to a lower storage tier based on the file system's last-accessed date, especially when that last-accessed date may reflect the last time the data was moved to a new folder by a junior admin who didn't think about retaining metadata, or by a user who did a search for documents containing the word kumquat. Add in that the migrated files are replaced with stubs that recall files from the lower tier when that user does a content search, and it just didn't seem worth it.
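The naive policy those products implemented can be sketched in a few lines of Python. The threshold and file names here are hypothetical, not anything a specific vendor shipped, but the sketch makes the flaw concrete: the last-accessed timestamp is reset by any read at all, so a content search or a bulk copy instantly makes cold data look hot again.

```python
import time
from pathlib import Path

def classify_by_atime(root: str, threshold_days: int = 180):
    """Naive ILM-style classification: the file system's last-accessed
    date alone decides which tier a file belongs on."""
    cutoff = time.time() - threshold_days * 86400
    hot, cold = [], []
    for path in Path(root).rglob("*"):
        if path.is_file():
            # st_atime is updated by ANY read: a search for "kumquat",
            # a virus scan, or an admin's migration to a new folder all
            # promote a stale file back to the "hot" tier.
            if path.stat().st_atime < cutoff:
                cold.append(path)
            else:
                hot.append(path)
    return hot, cold
```

A real classifier would need metadata that survives copies and scans (owner, content type, business context), which is exactly the information IT didn't have.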

Most significantly, IT doesn't know, or care, enough about the data to define the classification rules. The IT guys, especially the storage guys, are primarily worried about keeping the OLTP systems running smoothly; that's where the company makes its money. The users, who know that the PowerPoint presentations from last year's sales meeting are probably never going to be used again, have no incentive to help.

I think the business and data center environments may have changed enough for another try at the ILM concept; of course, we'll have to give it a new name. Just keeping the data on the primary NAS is getting more painful than just the cost of storage. The data center is full and out of power. The budget's been cut and isn't coming back any time soon. Most painful, the huge pile of .MP3s and old spreadsheets is taking longer and longer to back up and manage.

System tiering, like Compellent's, EMC's FAST, and Symantec's Dynamic Storage Tiering, shows a lot of potential for SSD applications and for enabling higher drive density, but it doesn't address the management and backup problems. Thankfully, past failures haven't kept new vendors like AutoVirt and Seven10 from taking another shot at the problem.

Disclosure Statement: I have a business relationship with Symantec. Hopefully this won't ruin it.

About the Author

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at DeepStorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems, and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino, and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop, and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at: http://www.deepstorage.net/NEW/GBoS
