Interop Data Center Chair Jim Metzler On Application Delivery

Mike Fratto

April 20, 2010

5 Min Read

We caught up with Jim Metzler to discuss his plans for the Interop Application Delivery 2.0 track. Application delivery has always been important, but new demands are being made on IT to deliver applications to users wherever they are, on whatever device they are using. Users are getting spoiled by ubiquitous bandwidth and more powerful computing devices like netbooks and PDAs. Getting the app to the user efficiently and securely is a challenge. Where that application resides, such as a data center or cloud service, certainly impacts IT's ability to deliver. These are some of the topics in the Interop Application Delivery 2.0 track.

NWC: WAN application delivery is pretty well understood, so what else is there to talk about?

Metzler: Well, you're right, it's not as if anyone has pointed out the ten new problems with virtualization, that hasn't happened. I think one of the more interesting things relevant to application delivery is the emergence of virtualized application delivery appliances, and I have a session on that.

In the old days, we worried about remote users -- they aren't in headquarters, they are out in the branch offices, and I have my resources centralized in headquarters, so more people access these over the WAN, etc. That introduced delay and all kinds of issues. Public cloud means you have even more of those people accessing resources over the WAN, and that increases the need for optimization, if you will. Now, the more typical thing is not the branch office, but workers are out there moving around with a laptop or a smartphone, and while before they were maybe only sending e-mail from their phone, now they are running real business applications and even a small amount of packet loss can just kill throughput.

So, there are some new challenges such as delivering real applications to mobile users. There are new solutions for those problems, and virtualized appliances, which are kind of interesting because you can at least, in theory, more easily move them to where you need them. It's the beginning of solving the problem of moving the VM around even though the rest of the system is physical and static. And, with CPUs getting more powerful, you see less often the need for dedicated devices to accomplish a majority of the tasks.

NWC: When you say doing the majority of the tasks, there is the traditional thinking, and I've seen this with hardware appliances, hardware ADCs, where there are multiple tiers: the outer tier that faces the world, which will primarily do SSL offload, compression, caching, and other CPU-intensive stuff, and then facing inward toward the application servers, there might be another tier that does load-balancing, etc.

Metzler: That's a fair comment, and it's all part of the shifting data center architecture. There are more tools now to play with.

NWC: So the networks are changing, and how we design our networks and how we treat our networks is going to have to change.

Metzler: Yes, and for the last decade it's not been questioned -- Access, Distribution, Core is the mantra -- and we just assume this is how we design networks. We have a whole generation of people who have been trained to think in a particular way, and hopefully they were taught to learn about new things, too, and not just that one model, but I fear they weren't.

That they were trained on "here's how it is" versus "here's how to think about designing a network architecture."

NWC: So in a public cloud environment, though, you don't have that physical machine to offload to. You just have software.

Metzler: Yes, there are some challenges there because the old ASP model didn't work for a number of reasons, one of which was there wasn't much virtualization -- it was their own physical machine -- and so there was no significant cost advantage of the ASP over the enterprise. And if there is a cost advantage now (emphasizing if), it's because the cloud providers have done a better job of virtualization, automation, etc. than the enterprise has, and the advantage goes to them accordingly.

NWC: So going back to the mobility issue, a lot of the big service providers -- Google, Salesforce, etc. -- will do global server load-balancing (GSLB) and try to locate resources close to you, network close, for speed and performance rather than pulling everything back over the WAN. Do you see that becoming a more viable option for enterprises whose application deployment is moving to a more web-based delivery model?

Metzler: Well, global server load-balancing has been around for a while, and to your implicit point, it hasn't had as much uptake as you might think. But there is the real possibility of doing that, and I think the concept of serving content as close to the user as possible is another interesting idea. I'm thinking of Akamai, not only for serving up the content, but they have recently come up with a web application firewall service, and they distribute that around their network of tens of thousands of servers. So it comes down to security. Some things are important to do on-site, and so you still want to have a firewall on-site, but others you can do "out there" because, say, if you try to stop that DDoS attack when it's already coming down your pipe to the building, it's too late. So I think that kind of distribution of functionality -- not to the exclusion of anything else, I'm not saying we're going to go from this centralized approach to a decentralized one -- offers more of a balance.

I think that's definitely a key part of next-generation application delivery solutions, and yes, there might be a bit of uptake on GSLB, but I never see the enterprise shift what it is doing too quickly. There will be a movement in that direction, but we're going to spend the next 3-5 years rethinking and re-architecting the LAN and WAN and slowly heading in a new direction. You don't just pull everything out and put new stuff in. That's why some of these sessions at Interop are driving home ideas that we're going to pick up and carry into Interop in New York and Virtual Interop in May, to keep these conversations going and ferret out the questions and issues.

About the Author(s)

Mike Fratto

Former Network Computing Editor
