The Promise of FCoE
I think it has promise for enterprise users. It has more promise for FC vendors
February 18, 2009
10:50 AM -- I received an email the other day pitching a market analysis report on Fibre Channel over Ethernet (FCoE). Looking at it before my first cup of coffee kicked in, I started thinking about both FCoE and this kind of emerging market analysis. The vendor will remain nameless, as my observations and general snarkiness are based on many such reports from many firms over the years.
Firstly, have you ever seen a "State of the Industry" report in the first two years of an industry that didn't say, "This is just what the market needs and is the best thing since sliced bread"? After all, it's industry folks they're trying to get to spend $5,000 a pop for the reports. The guy trying to convince his bosses that they need to add CEE and/or FCoE to their switches or come up with a new line of CNAs is going to be glad to buy the report to support his position.
To some extent, market analysts are like stock analysts -- they never give a Sell signal. A recent story in The New York Times revealed that even over the past year, with the market tanking, over 90 percent of analyst ratings were still Buy recommendations. Come to think of it, that's a good business. I could write Pollyanna reports about each new technology as it comes around and would only have to sell two copies of each to make it pay better than journalism. (Of course, standing on the street corner selling apples might pay better than journalism, too.)
As for FCoE, I think it has promise for enterprise users. It has more promise for the existing FC vendors. As packaged, you'll still need special high-priced switches for five to eight years: the switch has to do CEE/DCE (which will be a strictly Foundry/Cisco thing for at least two years, since there's no demand for the extra cost outside FCoE), plus the name service and other FC-specific functions that, in the Ethernet world, network guys run on servers attached to the net (DNS, DHCP, etc.) rather than in the net itself. The switch guys can now leverage commodity Ethernet components and not have to develop, say, the PHY for 16-Gbit/s FC, which would cost big bucks and sell in small numbers.
Running two to four copper cables per server (even if they're twinax for now; 10GBase-T will come) makes a lot of sense compared to two to four fibers plus two to four coppers. One set of network management tools (both OpenView for link up/down and protocol analyzers) makes sense, too. Of course, the gleaming white papers I've seen supporting FCoE claim the average server has four Fibre Channel and eight GigE cables. After all, the math works better if you exaggerate the starting point just a little.

For QLogic and Emulex, it's about preserving the HBA business. Neterion, Broadcom, and the rest of the 10-Gbit/s Ethernet chip guys are building iSCSI into their silicon, since TOE (TCP Offload Engine) hardware is general purpose and iSCSI is just a little more microcode. So the market for iSCSI HBAs remains slow -- Adaptec dropped out, Alacritech cut its prices in half, and QLogic is the last player standing. FCoE is the only way to keep selling HBAs without 16-Gbit/s FC.
It also, of course, preserves FC management tools, since it preserves zoning, naming, and all that stuff. Since I started as a network guy, that part of FC has always seemed to me both arcane and like reinventing the wheel. We solved lots of those problems in the '80s and '90s. Then the FC guys ignored all that work and created a set of protocols with quirks like needing a name server because WWNs weren't really like MAC addresses, and not allowing multiple WWNs per device, which took NPIV to fix. That wouldn't have happened if they had a UDP-like layer, which would have added negligible overhead.
I also wonder about the "converged" part. The storage guys are very protective of their turf -- it's one of the reasons low-utilization application servers have FC HBAs and there are no real iSCSI/FC bridges in the enterprise. And the network guys shake their heads at the belt, suspenders, duct tape, and thumbtacks storage guys use to get seven nines of reliability in the storage net. I've seen four FC connections to redundant directors with redundant everything just to support dev servers. For a server to really have just two cables for redundancy, the storage guys and network guys need to share the links and some of the switches that carry both kinds of data. In some organizations, the CIO will make the two teams play nice together. In others, I see four cables -- but that's still an improvement.
Howard Marks is chief scientist at Networks Are Our Lives Inc., a Hoboken, N.J.-based consultancy where he's been beating storage network systems into submission and writing about it in computer magazines since 1987. He currently writes for InformationWeek, which is published by the same company as Byte and Switch.