Is ILM Finally Ready For Prime Time?
November 20, 2009
It's an undeniable fact that most organizations are drowning in unstructured data. Yet despite the general acceptance of the concept that files should be stored in ways commensurate with their changing value over time, few organizations really manage their files well. Vendors have made several attempts at making a buck with tools that automagically manage unstructured data, calling it HSM then ILM, with little acceptance and even less profit.
After performing an autopsy on the last generation of ILM vendors, conducted over drinks with some of their former execs and in conversations with IT professionals, I think I understand why FAN and ILM joined HSM in the TLA (Three Letter Acronym) graveyard.
First, the declining cost of storage, driven in part by corporate acceptance of SATA drives, has made the Doritos method of file management (crunch all you want, we'll make more) affordable. For 10-15 years we've periodically done forklift upgrades from NetWare and Windows file servers to several generations of NetApp and Celerra NAS, just copying the data to a bigger system with bigger drives. If the system filled up between technology refreshes, an additional tray of 250GB or 1TB drives was cheaper than an F5/Acopia virtualization switch or Scentric software, and it didn't require a major project to implement.
Second, classification and data migration tools have been primitive, complex and expensive. $10,000 a terabyte is a lot to pay for a product that migrates files to a lower storage tier based on the file system's last-accessed date. That's especially true when the last-accessed date may reflect the last time data was moved to a new folder by a junior admin who didn't think about retaining metadata, or the last time a user searched for documents containing the word kumquat. Add in that the migrated files are replaced with stubs that recall files from the lower tier whenever a user runs a content search, and it just didn't seem worth it.
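To make the objection concrete, here is a minimal sketch of the naive policy these products implemented: sweep a directory and move anything not accessed in N days down a tier, leaving a placeholder behind. The function and directory names are hypothetical, and real HSM stubs are filesystem reparse points that trigger transparent recall, not plain files as shown here. Note that the whole scheme hinges on `st_atime` being meaningful, which is exactly the weakness described above.

```python
import os
import shutil
import time

def migrate_cold_files(primary_dir, archive_dir, max_idle_days, leave_stub=True):
    """Move files whose last-accessed time is older than max_idle_days
    from primary_dir to archive_dir, optionally leaving a placeholder.

    Illustrative only: a real tiering product would write recall metadata
    into the stub and hook the filesystem to recall on open.
    """
    cutoff = time.time() - max_idle_days * 86400
    migrated = []
    os.makedirs(archive_dir, exist_ok=True)
    for name in os.listdir(primary_dir):
        src = os.path.join(primary_dir, name)
        if not os.path.isfile(src):
            continue
        # st_atime is the last-accessed time -- easily reset by a copy,
        # a backup sweep, or a user's content search, which is the flaw
        # the article points out.
        if os.stat(src).st_atime < cutoff:
            shutil.move(src, os.path.join(archive_dir, name))
            if leave_stub:
                # Zero-byte placeholder standing in for a recall stub.
                open(src + ".stub", "w").close()
            migrated.append(name)
    return migrated
```

A single indexed search or a metadata-blind copy touches every file's atime, so after one such sweep this policy sees the entire share as "hot" and migrates nothing until the clock runs out again.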
Most significantly, IT doesn't know, or care, enough about the data to define the classification rules. The IT guys, especially the storage guys, are primarily worried about keeping the OLTP systems running smoothly; that's where the company makes its money. The users, who know that the PowerPoint presentations from last year's sales meeting are probably never going to be used again, have no incentive to help.

I think the business and data center environments may have changed enough for another try at the ILM concept; of course, we'll have to give it a new name. Keeping all the data on the primary NAS is getting more painful than just the cost of storage. The data center is full and out of power. The budget's been cut and isn't coming back any time soon. Most painful of all, the huge pile of .MP3s and old spreadsheets is taking longer and longer to back up and manage.
System tiering, like Compellent's, EMC's FAST and Symantec's Dynamic Storage Tiering, shows a lot of potential for SSD applications and for enabling higher drive density, but it doesn't address the management and backup problems. Thankfully, past failures haven't kept new vendors like Autovirt and Seven10 from taking another shot at the problem.
Disclosure Statement: I have a business relationship with Symantec. Hopefully this won't ruin it.