Politics And The Data Center
October 12, 2009
Ideally, we would build our data centers with the products that best enable us to deliver services to the rest of the organization, making decisions about design philosophies such as future-proofing and best-of-breed point products vs. integrated solutions separate from turf and power politics. I've been reminded a couple of times recently that we don't live in an ideal world.
First, an organization hired me to prepare a report evaluating the SAN proposals for their New York office. I'm looking at the solution the guys in New York want and the one headquarters in Europe would like them to have.
Then, at an "Ask the Experts" session after my I/O Virtualization presentation at Storage Decisions NY, a group building a new data center wanted to talk about their options for a new virtual server farm. One of the first things they told me was that the network team was lobbying hard for Cisco's UCS.
My reaction was "Of course they are." The network guys always want to buy from Cisco, just as the old-line mainframe guys always want to buy from IBM. Continuing to buy from the incumbent vendor not only improves that vendor's position in the IT department, it endorses the decisions the team has made in the past. Add in that they're pushing not only UCS servers but also Nexus 5000 top-of-rack switches and Nexus 7000 core switches, and the network group is looking at the new UCS-equipped data center like it's Christmas.
Different groups within IT fighting for turf is at least as old as the PC. Twenty-five years ago, PCs and the servers that supported them forced their way into central IT after users bought them out of departmental budgets to solve problems the mainframe priesthood wasn't addressing. The hardware and operations side of today's data center is usually dominated by three groups: the systems group provisions and runs servers and compute resources; the storage admins run Fibre Channel switches, disk arrays and backup systems; and the network team connects it all together.

The expansion of server virtualization and blade servers has started to blur the lines between these groups, lines established after years of guerrilla warfare. Are the expanders and switches in a blade chassis systems resources or network resources? Do we really want CCIEs poking around in vCenter to configure vSwitches? Where do the new lines get drawn?
Things get even more complicated with consolidated and virtual I/O. Does the FCoE top-of-rack switch belong to the network group or the storage group? These are questions your organization will have to work out, or more likely fight out, before you move to the brave new world of converged networking.