Live From SNW
Read a collection of my updates from two days at Storage Networking World. The focus this year was on using new technologies to allow the IT staff to become more efficient.
By George Crump, April 9, 2009
10:30 AM -- The final afternoon and the final push at Storage Networking World in Orlando on Wednesday started off with a meeting with Data Domain. The company was mostly highlighting its already announced OS upgrade and its new midrange de-duplication product. The new OS, available to all current customers, accelerates performance anywhere from 50 percent to 100 percent, depending on the platform, the company says. Using Data Domain's Stream Informed Segment Layout, it can achieve continuous improvement in throughput without the use of additional controllers, compression, or high-speed disk caches.
What I still find fascinating is how much support for Symantec's OST API helps performance. On a DD690, the company can achieve backup throughput of 2.7 TB per hour over 10 Gigabit Ethernet. It is an interesting contrast to CommVault's strategy of building de-duplication into the backup application itself. While it remains to be seen which method users choose, it is a compelling alternative.
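For context, a quick back-of-the-envelope conversion (my arithmetic, using only the 2.7 TB/hour figure quoted above) shows how much of a 10 Gigabit Ethernet link that throughput actually consumes:

```python
# Rough conversion of the quoted 2.7 TB/hour backup throughput into line-rate
# terms. My own arithmetic, assuming decimal terabytes.
tb_per_hour = 2.7
bytes_per_second = tb_per_hour * 1e12 / 3600        # ~750 MB/s
gigabits_per_second = bytes_per_second * 8 / 1e9    # ~6 Gbit/s

print(f"{bytes_per_second / 1e6:.0f} MB/s = {gigabits_per_second:.1f} Gbit/s")
print(f"~{gigabits_per_second / 10:.0%} of a 10 GbE link")
```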
Next up was HiFn, and they certainly get the award for best new product name at the show -- the BitWackr. This is essentially a de-duplication card that inserts into any Windows server (more platforms are coming) and provides de-duplication and compression. The company is being realistic about how much de-duplication you can expect, citing 3x for de-dupe and then another 3x for compression.
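Those two claims stack multiplicatively, so a rough sketch of the effective reduction, using only the ratios HiFn cited (the 10 TB data set is a made-up example), looks like this:

```python
# Effective data reduction when de-duplication and compression stack
# multiplicatively (a simplification; real ratios depend on the data).
dedupe_ratio = 3.0        # HiFn's cited de-dupe expectation
compression_ratio = 3.0   # HiFn's cited compression expectation

effective_ratio = dedupe_ratio * compression_ratio   # ~9:1
raw_tb = 10.0                                         # hypothetical data set
print(f"{raw_tb} TB stored in ~{raw_tb / effective_ratio:.1f} TB "
      f"({effective_ratio:.0f}:1 effective)")
```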
The software component runs above the dynamic disk driver and intercepts calls to the specific disks you select for de-duplication. As is often the case, there is a performance hit -- the card today can perform at about 30 Mbit/s. That speed will increase in the future, but even now -- for the right workloads -- it is certainly acceptable, and many users may not even notice a performance impact. HiFn has priced the card at only $995, so dipping your toe in the water is not painful at all.

My final meeting of the day was with Tarmin, which has a disk archiving platform with the data movement software built in. As I said yesterday, disk archiving is a technology that can reduce both capex and opex costs at the same time. The technology, GridBank, does this by leveraging commodity, heterogeneous storage and servers. The result is a software front end with integrated content search and the ability to publish and share that content as needed.
Last fall's show was all about reducing capital spending with technologies like de-duplication and compression. There was also a lot of talk about Cloud Storage. Those themes were still present at the show this year, but the focus was more about using all these technologies and others to allow the IT staff to become more efficient. This is an important bridge for suppliers and customers to cross.
Almost every solution today can be packaged as a way to save you money. The question to ask is: Can it do that and make you more efficient?
For the most part, spring SNW is in the books. It is finally warming up here, so I am off to the beach to get a little R&R this weekend.
2:20 PM -- I am racing from one vendor meeting to another at Storage Networking World in Orlando. I am finding the pressure of doing the blog updates to be very helpful. It makes me pay attention, and then having to re-write my thoughts crystallizes things in my mind much more quickly. Still, it can be a bit crazy.

Enough whining and complaining. Permabit continues to march forward with its disk-based archiving solution, with a focus on improving performance and being ever watchful of the price per GB. Expect them to crack the $3 per GB barrier soon. Disk archiving is one of the few storage optimization technologies that also increase IT efficiency, something I will be harping on quite a bit over the next few blog entries.
Speaking of efficiency, Virtual Instruments is tackling an angle of storage and server virtualization that seems to get lost in the conversation -- performance. While there are a few good tools for managing capacity, there are not many that address I/O capacity and can tell you, from a performance perspective, which virtual machine should go on which virtual host and where in the SAN that host should be connected.
Not having this information makes it tough to virtualize what I call the hard stuff. So far, virtualization has been about virtualizing test and development more so than Exchange and SQL. Tools like this allow you to know what workloads should be placed on which hosts and then monitor them for I/O issues.
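As a rough illustration of the decision such a tool informs, here is a minimal greedy-placement sketch. It is my own simplification, not Virtual Instruments' product logic, and the IOPS figures are invented:

```python
# Hypothetical greedy placement of VMs onto hosts by remaining I/O headroom.
# A simplification of the problem described above, with made-up numbers.
vms = {"exchange": 8000, "sql": 6000, "test": 500}     # peak IOPS per VM (assumed)
hosts = {"host-a": 12000, "host-b": 10000}              # IOPS capacity per host (assumed)

placement = {}
headroom = dict(hosts)
for vm, iops in sorted(vms.items(), key=lambda kv: -kv[1]):   # heaviest workload first
    host = max(headroom, key=headroom.get)                     # host with most free I/O capacity
    if headroom[host] < iops:
        raise RuntimeError(f"No host has I/O headroom for {vm}")
    placement[vm] = host
    headroom[host] -= iops

print(placement)   # e.g. {'exchange': 'host-a', 'sql': 'host-b', 'test': 'host-a'}
```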
Storserver was next up. They are an interesting company that has attempted to make Tivoli Storage Manager (TSM) an SMB product by putting it onto an appliance and improving the interface. What users get is a turnkey appliance that is ready to take over the data protection process.
What Storserver is highlighting here is the ability to improve TSM's VMware protection capabilities by taking a lot of the scripting out of the VMware Consolidated Backup-to-TSM integration process. More importantly, it has designed a specific interface just for the VMware admin to back up and restore that environment's machines. Considering that the VMware administrator is often also in charge of backups for that environment, the approach may capture some attention. The company has also made the VMware module available to stand-alone TSM customers.

HP and I had our first time zone transition issue of the event, but we managed to squeeze in a meeting. HP is really building up the unification business, focusing on moving data and applications closer together. In a way this makes sense. A storage system is really a sophisticated software application that manages storage, running on a redundant set of servers, and those servers are often Intel-based already. So just move that onto the same box that is your virtualization host, and in 4U of rack space you could have 30 virtual servers, storage, and the software to manage all of it. Interesting strategy, but I think a lot of storage vendors will have something to say about that.
One of those vendors is 3PAR, which this week announced its F-Class midrange array. The system can have two or four storage controllers and scales to almost 300 drives. What 3PAR claims is unique is its mesh controller architecture. Unlike the traditional midrange box that maps a LUN to one controller, with the other controller used only for redundancy, in the 3PAR mesh all of the controllers are active against all of the storage pools. Additionally, 3PAR claims there is no performance drop-off as you add more and more drives to the unit: it maintains its performance at 300 drives, unlike other architectures that see a significant performance drop-off as the drive count reaches 50 percent to 75 percent of maximum capacity.
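To make the distinction concrete, here is a toy model of the two approaches. It is my own illustration, not 3PAR's implementation, and the LUN counts and IOPS numbers are invented:

```python
# Toy model: active-passive LUN ownership vs. a mesh where all controllers are
# active on all LUNs. Illustrative only; the IOPS figures are made up.
lun_iops = {"lun0": 9000, "lun1": 1000, "lun2": 1000, "lun3": 1000}

# Traditional pair: each LUN is pinned to one controller; a hot LUN loads only its owner.
owner = {"lun0": "ctrl-A", "lun1": "ctrl-B", "lun2": "ctrl-A", "lun3": "ctrl-B"}
pair_load = {"ctrl-A": 0, "ctrl-B": 0}
for lun, iops in lun_iops.items():
    pair_load[owner[lun]] += iops
print("active-passive:", pair_load)        # {'ctrl-A': 10000, 'ctrl-B': 2000}

# Mesh: every controller services every LUN, so the load spreads evenly.
controllers = ["ctrl-A", "ctrl-B", "ctrl-C", "ctrl-D"]
mesh_load = {c: sum(lun_iops.values()) / len(controllers) for c in controllers}
print("mesh:", mesh_load)                  # 3000 IOPS per controller
```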
1:25 PM -- There is no shortage of product announcements at a show like Storage Networking World in Orlando, and companies that don't have anything new still want to talk about their product lines. Storwize did have something new. They offer an inline real-time compression tool that provides 50 percent to 80 percent data compression of primary storage with little to no performance impact. At the show they introduced the STM-6000i, which improves performance and compression rates. Storwize is making the case that storage optimization on primary storage has to come with no impact to performance. All the other technologies available affect performance, the company says, which is OK for secondary storage but not for primary.
One of the more interesting meetings was with Data Robotics, which unveiled a Pro version of its Drobo, a storage array for SMBs. The new model adds iSCSI support along with USB and FireWire. The unit has eight bays and automatically configures data protection as you add drives. You can buy the unit with drives already in it or add your own, and they can be of mixed capacities. Very cool. What amazes me the most is that the DroboPro resolves some of the issues that have been haunting the enterprise, like mixed drive sizes and adjusting data protection. It is data aware, with a capacity indicator on the front of the unit. I might have to actually break down and buy one.
Another vendor targeting the SMB market is backup specialist Axcient, which has an appliance that sits at the customer's place of business and then replicates the backup data to Axcient's storage center. What makes these guys unique is how they give the reseller that sells to these smaller businesses the ability to stay involved in the process. The reseller can drop off the on-premise appliance, configure it remotely, and then monitor it through a Web page. Lastly, the on-site appliance can also be used as a virtual failover server -- most SMBs can't afford a secondary server that sits idle until their primary server fails, so when that happens they suffer. Axcient can allow the failed server to run as a virtual machine on its appliance.

11:50 AM -- As I am learning, it can be a mistake to overbook meetings at a tradeshow -- especially when there are lots of companies with interesting announcements and things to say. You can never spend as much time as you would like talking to any one of them.
Take LSI Logic, which showed 1 million IOPS on a standard Windows server running SAS drives with three of its controllers. Very impressive. LSI continues to dominate the storage-related silicon that goes into server-attached storage and is well positioned to take advantage of 6-Gig SAS technology.
And then there was cloud storage provider Nirvanix, which announced its relationship with Ocarina to optimize data before it moves to the cloud. Leveraging Nirvanix Cloud NAS, customers can now significantly reduce the amount of data that is sent across the Internet to the Nirvanix storage data center. Ocarina is on a bit of a roll, with OEMs looking to offer optimized storage. It has struck deals with BlueArc, HP, Isilon, and now Nirvanix, which, for its part, continues to extend its reach, with more than 400 customers.
9 AM -- I attended the Brocade press conference, where they announced and showed their FCoE products. In a later one-on-one meeting with Brocade, we went through the strategy in detail. Their FCoE plans are a very straightforward top-of-rack strategy. The switch will have 20 10-Gbit/s Converged Enhanced Ethernet (CEE) ports and eight 8-Gbit/s Fibre Channel (FC) ports. This falls in line with the recommended deployment strategy that we covered in the unified-infrastructure, single-fabric data center article in InformationWeek.
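For a sense of scale, the aggregate line rates on each side of that top-of-rack switch work out as follows (my arithmetic from the quoted port counts; actual usable throughput will differ):

```python
# Aggregate bandwidth of the announced top-of-rack switch, computed from the
# port counts quoted above. My arithmetic; line rates, not usable throughput.
cee_ports, cee_gbps = 20, 10       # Converged Enhanced Ethernet side
fc_ports, fc_gbps = 8, 8           # Fibre Channel side

cee_total = cee_ports * cee_gbps    # 200 Gbit/s toward the servers
fc_total = fc_ports * fc_gbps       # 64 Gbit/s toward the FC fabric
print(f"server side: {cee_total} Gbit/s, fabric side: {fc_total} Gbit/s "
      f"(~{cee_total / fc_total:.1f}:1 if all traffic were storage)")
```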
What is interesting is that Brocade has essentially held serve in the FCoE/unification match with Cisco, yet they continue to invest in FC and are openly committed to delivering 16-Gig Fibre Channel. Cisco, which was not at the show, has not made a similar commitment.

I also visited with NetGear, which is expanding its "prosumer" and SMB NAS storage business with 2-bay, 4-bay, and 6-bay systems. Straightforward and low cost, these systems would seem to be well suited for both of those markets.
4 PM -- Riverbed Technology, which last year announced a primary data de-duplication product called Atlas, has slowed down that effort a little to focus more on its core competency -- WAN optimization. Smart move: focus on what you do well right now to reduce costs and increase user efficiency. Good strategy.
To this end, Riverbed is focusing on how its WAN optimization can save money and increase efficiency. Reducing bandwidth consumption by 90 percent allows for the reduction of the data infrastructure in branch offices. Further, Riverbed has worked out relationships with both VMware and Microsoft so that services can be moved onto its appliance, further reducing the amount of hardware needed for the remote office. With WAN optimization you reduce the need for bandwidth, potentially eliminating the need for an upgrade. You can also get better replication performance by reducing the lag between the DR site and the primary site.
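As a rough illustration of the replication benefit (the data set size and link speed below are my own hypothetical numbers, not Riverbed's), a 90 percent reduction in data sent shrinks the transfer time over the same link proportionally:

```python
# Illustrative effect of a 90% reduction in WAN traffic on a nightly replication job.
# The data set size and link speed are hypothetical assumptions.
nightly_change_gb = 200          # data to replicate to the DR site (assumed)
wan_mbps = 45                    # a T3-class link (assumed)
reduction = 0.90                 # bandwidth reduction cited in the briefing

def hours_to_send(gb, mbps):
    return gb * 8 * 1000 / mbps / 3600

print(f"before: {hours_to_send(nightly_change_gb, wan_mbps):.1f} h")
print(f"after:  {hours_to_send(nightly_change_gb * (1 - reduction), wan_mbps):.1f} h")
```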
Then came EMC, which had no major announcements at the show but re-emphasized its recent Celerra announcement of upgraded performance as a result of using the CX4 back-end and, of course, its entry into primary storage de-duplication. This sparked quite a bit of discussion about the role of primary storage de-duplication.
My opinion is that, other than for VMware images, the return on investment for primary storage de-duplication is questionable, because for de-duplication to be effective you need duplicate data. While there is clearly duplicate data on primary storage, there is nowhere near the amount found in repetitive backups.

In primary storage, it may make more sense to compress all data using real-time compression technology like that from Storwize, as opposed to de-duplicating some data. You can also use, either in conjunction with compression or on its own, a product like Ocarina Networks' de-duplication solution, which investigates data more thoroughly for a deeper duplication analysis and can then migrate it to a different storage tier if you choose.
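To put that ROI argument in rough numbers, here is an illustrative comparison. The ratios and sizes are assumptions of mine, not vendor benchmarks, but they show why the same engine pays off far more against repetitive backups than against largely unique primary data:

```python
# Illustrative comparison of de-duplication payoff on primary storage vs. repeated
# backups. Ratios and sizes are made-up assumptions, not vendor figures.
primary_tb = 10
primary_dedupe_ratio = 1.5           # modest duplication on primary data (assumed)

retained_fulls = 12                  # retained full backups of the same data (assumed)
backup_dedupe_ratio = 15             # typical of repetitive backup streams (assumed)

primary_saved = primary_tb - primary_tb / primary_dedupe_ratio
backup_raw = primary_tb * retained_fulls
backup_saved = backup_raw - backup_raw / backup_dedupe_ratio

print(f"primary: {primary_saved:.1f} TB saved of {primary_tb} TB")
print(f"backup:  {backup_saved:.1f} TB saved of {backup_raw} TB")
```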
Earlier in the day Symantec had a briefing with their new CEO, Enrique Salem. The focus here continues to be integrating the various components of security and storage as a result of the Veritas merger. Some integration has been accomplished -- sharing of devices, for example -- and more is on the way via a single GUI, but that will take time to develop.
In the meantime they are trying to focus on establishing their individual products as best of breed within their given categories, as well as leveraging partners like Data Domain through the Open Storage Technology (OST) API. It is an interesting approach. Which strategy makes the most sense: CommVault's do-it-all approach or Symantec's?
12:25 PM Update -- My first meeting was with Hitachi Data Systems' Hu Yoshida and Eric-Jan Schmidt. Hitachi seems to be one of those companies that is doing well despite the economic news -- double-digit growth in most areas and continuing to innovate. Nothing really new from HDS at Storage Networking World. Clearly they have a lot invested in their virtualization platform with the Universal Storage Platform (USP) product.
What's interesting in this economic downturn, as opposed to others, is that customers have a choice. In addition to driving down costs, you can increase efficiency, meaning you can make your IT staff more productive. Whether you are using gear from 3PAR, HDS, or DataCore, it is critical to use virtualization to reduce the amount of storage capacity you have to manage while at the same time making your IT staff more productive.

Next up was CommVault. I spoke with Michael Marchi, the VP of product and segment marketing. They are clearly not making friends within the de-duplication community, going head-to-head with the likes of Data Domain, Sepaton, and FalconStor with their own software-based de-duplication. Here CommVault and HDS are in agreement -- de-duplication will just become a feature, implemented either in software or through a virtualization appliance like the USP. While I am sure that Data Domain and Sepaton will have something to say about that, it is interesting to see this stance being taken.
For an existing CommVault customer, this certainly makes sense. The fly in the ROI model will be in the rest of the accounts that don't have CommVault. Will customers really change out their backup applications to get de-dupe and other capabilities, especially in this economy? Or will they try to fix and tweak the edges of the environment without doing a major overhaul?
Virtualization of the storage infrastructure and de-dupe in backup are here to stay -- how they will be utilized and where they are going to be implemented are still hotly debated. Most likely there will be room for many different implementations, based on what customers need and what IT budgets allow.
11:20 AM -- Today and tomorrow, Byte and Switch and I will be trying an experiment of sorts. I will be providing live updates from Storage Networking World in Orlando. Between the two days, I have 27 briefings scheduled and, of course, time set aside to walk the exhibit hall as well as bounce into a few of the presentations. Then there are the best meetings, the ones where you tackle people in the hallway and get the real information.
Today's cast of characters -- I mean meetings -- starts with Hu Yoshida, CTO of Hitachi. Hu and I have been having an interesting conversation about virtualization in my other blog on InformationWeek, a sister site to this one. Then it is on to CommVault, where they will explain to me how a single application is going to take over the world and solve all our problems. Third up is Riverbed, which is going to explain how its primary storage de-dupe product is coming along. Then EMC is going to beat me up -- I mean, meet with me -- about their disk backup and de-duplication plans. Then I have been invited to lunch with Symantec's new CEO, Enrique Salem.

In the afternoon, it is off to the Brocade press conference, where the company will announce its vision for server consolidation and network convergence, along with several related product introductions. After that I have a meeting with a Wall Street storage analyst to compare notes, and then I'm meeting with Netgear about their new SMB storage products. Next up is LSI for a discussion about their upcoming products. Then Nirvanix will provide me an update on how their cloud storage strategy is going.
But wait, there's more. In the evening I will have a quick meeting with Storwize to discuss their latest products and find out why they are marching through the inline compression market unabated. Another meeting with Brocade follows, this one with its CTO, Dave Stevens. I'm also meeting with Data Robotics, where I am hoping to score their just plain cool Drobo (more on that later, too). Finally -- finally! -- I will wrap up the evening with a meeting with Axcient to discuss their new SMB cloud backup strategy.
I am ready: laptop, broadband card, and several 1-liter bottles of Mountain Dew. Stay tuned: My goal is to provide an update after almost every meeting. Also, pray for our poor editors, who are going to have to edit some very on-the-fly posts.
— George Crump is founder of Storage Switzerland, which provides strategic consulting and analysis to storage users, suppliers, and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.
InformationWeek Analytics has published an independent analysis of the challenges around enterprise storage. Download the report here (registration required).