Is Mainframe Virtualization An Alternative For Open Systems Shops?

Can it be that mainframe computing and virtualization are an option for open systems shops without a prior mainframe history?

June 20, 2009


Seventeen years ago, the word was that mainframes were dead and distributed computing was the future, as a New York Times piece ("Networking; Just How Dead Is the Mainframe?") put it. The era spawned rows of servers in data centers, leaving IT to wrestle with networks that were long on cost-effective servers but short on management tools. Virtualization now headlines every organization's agenda, as IT works to reduce equipment footprints in data centers with open systems virtualization solutions like VMware.
However, a new sound is rumbling from under the virtualization din. Can it be that mainframe computing and virtualization are an option for open systems shops without a prior mainframe history?
Consider Transzap, a software-as-a-service (SaaS) provider of analytical reporting to 4,200 customers in the oil and gas industry. As a SaaS provider, Transzap competes for business against both the internal IT departments of its customers and other SaaS providers in the industry. "What we offer is mission-critical service to our customers, and to the oil and gas industry," said Peter Flanagan, Transzap's CEO. "One of the things we must consistently do is to meet all of our SLA (service level agreement) commitments."
Flanagan says that a major challenge for many SaaS companies is guaranteeing uptime and availability to their customers. "The uptime standard that SaaS providers have typically been willing to commit to has been 99.5 percent, but the industry is changing," said Flanagan. "The new standard that customers are asking for, and which we set for ourselves, is three 9's uptime of 99.9 percent."
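The jump from 99.5 to 99.9 percent sounds small, but it cuts the permitted downtime by a factor of five. A minimal sketch (the helper name is illustrative, not from the article) shows what each SLA figure allows per year:

```python
# Allowed downtime per year implied by an uptime SLA percentage.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours, ignoring leap years

def allowed_downtime_hours(uptime_pct):
    """Hours of downtime per year permitted by a given uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

for pct in (99.5, 99.9):
    print(f"{pct}% uptime allows {allowed_downtime_hours(pct):.1f} hours of downtime/year")
# 99.5% allows about 43.8 hours/year; three 9's allows only about 8.8 hours/year.
```

In other words, a three 9's commitment leaves room for less than nine hours of total outage per year, maintenance windows included.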
The three 9's uptime guarantee Flanagan wanted to provide Transzap customers got him thinking about new ways to approach his data center, which ran a distributed architecture of Dell and Linux servers. "We had lots of boxes and racks," said Flanagan. "We were finding that as we continued to grow, we were running more instances of systems and it was becoming harder to scale out our database. Our payloads were growing and we were also adding complexity. The total scenario made it much more difficult to deliver the quality of service our customers were expecting--and our costs were going up."
Flanagan and his staff also faced a quality-of-service issue: storage array failures in the data center's Intel-based boxes were beginning to affect availability. "The question for us was, how do you manage the hardware layer of the data center when your staff core competencies are in software and operating systems?" asked Flanagan. "We could encounter a RAID issue for a critical subsystem, and it was hard to track down the problem and tie it to a particular vendor's SLA or service contract with us. This was because we had the Linux operating system vendor, the hardware vendor, the database vendor--and they were all telling us it was the other guy's fault. Meanwhile, we were orchestrating a failover with clustered databases. ... We concluded that for all our Dell and Intel boxes, invariably our suppliers were putting us in a position where they were relying on us to make the diagnosis."
As Transzap considered virtualization to reduce the number of servers in its data center, its first instinct was to look at an Intel-VMware solution.
"Instead of buying eight to 16 new boxes from a commercial supplier, we felt we could reduce server sprawl and virtualize by using Red Hat and SUSE Linux on commodity Intel hardware with a VMware hypervisor," said Flanagan. "But then we asked ourselves, what kind of service would we get? This service strategy hadn't worked out for us in the past, so why should it work now?"
Open to all options, Transzap took the unprecedented step of evaluating an IBM System z mainframe. "It was unprecedented because we were a distributed, open systems shop without any mainframe experience--but there was this idea that possibly a System z could run a series of virtual Linux machines," said Flanagan. "We started to investigate this. The ultimate drivers for us were that the solution was less complex than anything else we had looked at, and it offered greater availability for our applications."
"There is an interesting dynamic in the market where growth companies become first-time buyers of System z, essentially making a two-for-one move," noted Joe Doria, IBM System z Marketing Director. "One is leap-frogging their competitors by deploying an enterprise-class platform that is secure and reliable, and two is running a flexible infrastructure that does not have to stretch to keep pace with growth."
Since Transzap wanted three 9's system availability, the company moved all of its mission-critical Linux systems to the System z, while maintaining a mix of Intel and Windows servers to run applications with lower availability requirements.
"You might call this a 'repurposed' IBM System z because we didn't have a history with the mainframe and were using it strictly to virtualize our data center, and to improve reliability and quality of service," said Flanagan. "I can also tell you that since June 2008, when the System z first went into production, the box has never been rebooted. ... Competition is very intense in the SaaS space, and with the re-architecture to System z, I'm sleeping easier at night."
