A Private Cloud Is Called IT

Mike Fratto

February 2, 2010

The title of this blog comes from a sentence penned by Steve Duplessie in Why the Cloud will Vaporize, and it's a sentence that really speaks to me. Duplessie writes an evocative article about the cloud market, making a few points about the benefits of cloud services before dropping that little nugget, which I want to expand on. Whether you call it a private cloud or a data center, the automation technologies and processes being developed for cloud services will trickle down into your data center, and that's good for everyone.

I can't ever talk about cloud without first defining the term. I am using private cloud to mean a cloud-like service that is wholly hosted in your data center on your own gear. Cloud means automated, scalable, flexible computing, networking and storage services that are largely virtualized. In effect, it's a private Infrastructure as a Service (IaaS). If you call me out on my definition, I am simply going to refer you to this paragraph. (ha-ha)

Alright. I don't think hosting a private cloud is going to net you the same economic advantages (if there are, in fact, any) that outsourcing to an IaaS cloud provider will, because your company still has to make the capital and operational investments to build out and manage the private cloud. What it is going to get you is an IaaS and the ability to unshackle your applications from hardware. I can't imagine there are too many servers that can't be virtualized, even if the hypervisor only runs one virtual machine.

Just virtualizing servers lets you move machines from server to server as needed with minimal, if any, downtime. Since virtual machines are just files, you can take snapshots of existing systems and quickly restore them if a server fails, a patch goes awry, or any other catastrophe occurs. The first time you bring a downed server back to life in the time it takes to boot a computer, you'll be sold. Functions like live migration are the icing on the cake, but those are only part of the benefits of an internal cloud. The real benefit comes from automation.
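
To make that concrete, here is roughly what the snapshot-restore-migrate workflow looks like with KVM's virsh command line. Treat it as a minimal sketch, not a recipe: the guest name web01 and the host kvm-host2 are placeholders, and your hypervisor's tooling will differ.

    # Snapshot the guest before applying a risky patch
    virsh snapshot-create-as web01 pre-patch \
        --description "known-good state before patching"

    # If the patch goes awry, roll the guest back in one command
    virsh snapshot-revert web01 pre-patch

    # Live-migrate the guest to another host (needs shared storage;
    # see the storage item in the list below)
    virsh migrate --live web01 qemu+ssh://kvm-host2/system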

I remember the days of having to make configuration changes and all the devices I had to log in to for moves, adds, changes and deletes. It took forever and was error-prone. Worse if I had to go pull cables. I longed for an automated system where I could, in a few clicks of a mouse, do the moves, adds, changes and deletes through a command-and-control console. I got pretty close with VMware and Symantec Altiris, but I never quite got all the way to a big red button. I needed to do a few things first:

  • Centralize storage on a SAN or NAS. Centralizing storage and making it available to all your servers is pretty critical. You typically can't, for example, do a live virtual machine migration without it. Plus, with central storage, backups, replication and access are all greatly simplified versus multiple storage locations (see the NFS sketch after this list). This was an area I pretty much ignored except for basic file storage, and thankfully, my NAS never died.

  • Unify networking to a single vendor and OS. One of the first things I did when I took over the lab was replace the organic network that had grown up over the years with a single vendor and a single OS. I only had to learn one command line, and I could make changes through the GUI or scripts that I developed (one such script is sketched after this list). I'd never want to go back to a multi-vendor network again.

  • Automate and template as much as you can. When I talk to IT managers, this is often the hardest hurdle to get over--removing the human from the process. You can do it. You and your IT staff probably take the same actions over and over, and that's a waste of time and resources. Once you have done something a few times, you can probably create a template and automate it (see the cloning example after this list).

  • Take advantage of advanced networking features. When we had the lab at Network Computing, at least three times a year someone would put a rogue DHCP server on the network, which would eventually cause outages as network devices got RFC 1918 addresses from the rogue. More than once, I spent hours tracking down the rogue only to trudge through the Syracuse snow to unceremoniously rip an AP out of the network. Features such as authorized DHCP server lists, QoS tagging and power management, once set, take care of those piddly time sucks (a quick rogue-DHCP check is sketched after this list).

  • Unify computing hardware. This is a hard one, since few of us have the luxury of buying the same equipment year over year, and even the same model numbers can vary in components. All that variation leads to image sprawl, but the closer you can get to unified hardware, the better it's going to tie into an overall management strategy. Even the basic hardware monitoring and server control offered on servers from Dell, IBM and HP are simplified with a single platform (see the IPMI example after this list).

  • Take advantage of integration APIs and SDKs. I did a lot of automation using my mediocre Perl and shell scripting skills. Home-grown programs don't need to be pretty, elegant or robust, just well-documented and functional. I set up scripts to check and restart services, send alerts based on down events, monitor environmentals (temperature, humidity, activity), and I scripted the installation of commonly used applications and a bunch of other processes I found myself doing over and over again (a service-watchdog sketch follows this list). Better yet, get a copy of AutoIt and use that. It's easier to learn than other scripting languages, and there is an active community and plenty of modules available.

  • Automate network services. Get rid of static IP addresses and implement some kind of IP Address Management (IPAM) process to keep track of what is on your network. You can invest in a commercial IPAM product, or use one of the simpler open source IPAM systems (a bare-bones starting point is sketched after this list). IPAM will be especially important when IPv6 starts to roll out, because you aren't going to be assigning IPv6 addresses by hand anymore.

  • Separate the management network from the data network. Anyone who has ever changed the interface IP address on a remote router and been promptly cut off knows what I am talking about. It's easy to do and hard to fix. Set up a separate management network, and I mean a physical network, so that you will always have access to equipment barring a catastrophic failure (a minimal example follows this list).
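
Here's what the shared-storage piece can look like in its simplest form: an NFS export mounted at the same path on every hypervisor so guests can live-migrate. The filer name and paths are placeholders, and a SAN with a cluster filesystem works just as well.

    # Mount the shared VM image store on each hypervisor
    mount -t nfs filer01:/vol/vmstore /var/lib/libvirt/images

    # Make the mount permanent across reboots
    echo "filer01:/vol/vmstore /var/lib/libvirt/images nfs defaults 0 0" >> /etc/fstab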
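
On the single-OS network, even a script as dumb as this loop becomes useful, because every box speaks the same CLI. The file name, credentials and command are placeholders; substitute your vendor's syntax.

    #!/bin/sh
    # Back up the running config from every switch listed in
    # switches.txt (one management hostname per line)
    while read switch; do
        ssh admin@"$switch" "show running-config" > "backup-$switch.cfg"
    done < switches.txt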
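
For the templating step, here's a sketch using libvirt's virt-clone: build one golden guest image, then stamp out copies instead of installing by hand. The names golden-template and app03 are placeholders.

    # Clone a new guest from the golden template
    virt-clone --original golden-template --name app03 --auto-clone

    # Boot it and let your post-install scripts take over
    virsh start app03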
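
For rogue DHCP hunting, a reasonably current Nmap can do in seconds what used to take me hours of cable-tracing: it broadcasts a DHCP discover on the local segment and lists every server that answers. Anything that isn't your authorized server is the rogue.

    # Run as root; lists all responding DHCP servers on the segment
    nmap --script broadcast-dhcp-discover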
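
On the hardware-management front, even without unified hardware you can get surprisingly far with plain IPMI, which servers from all the major vendors speak. The BMC hostname and credentials below are placeholders.

    # Query chassis health out-of-band, even when the OS is down
    ipmitool -I lanplus -H bmc-app03 -U admin -P secret chassis status

    # Dump temperature, fan and voltage sensor readings
    ipmitool -I lanplus -H bmc-app03 -U admin -P secret sensor list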
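
As an example of the check-and-restart scripts I mentioned, here's a minimal watchdog in shell. The service name and alert address are placeholders, and in production you'd want logging and a cap on restart attempts.

    #!/bin/sh
    # Restart a dead service and mail an alert
    SERVICE=httpd
    ALERT=ops@example.com

    if ! pgrep -x "$SERVICE" > /dev/null; then
        service "$SERVICE" restart
        echo "$SERVICE was down on $(hostname); restarted $(date)" |
            mail -s "$SERVICE restarted on $(hostname)" "$ALERT"
    fi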
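
If you're not ready for a full IPAM product, even scraping the DHCP server's lease file into a list beats a stale spreadsheet. This assumes ISC dhcpd; the lease file path varies by distribution.

    # Print current IP/hostname pairs from the dhcpd lease file
    awk '/^lease/ { ip = $2 }
         /client-hostname/ { gsub(/[";]/, "", $2); print ip, $2 }' \
        /var/lib/dhcpd/dhcpd.leases | sort -u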
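
Finally, the management network itself doesn't need anything fancy on the server side: a second physical NIC on its own switch and subnet. The interface name and addresses here are examples.

    # Put the dedicated management NIC on the management subnet
    ip addr add 10.10.0.21/24 dev eth1
    ip link set eth1 up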

These steps will prepare you for automating your data center and make your management more robust and powerful, leaving you with more time to work on more interesting tasks.


About the Author

Mike Fratto

Former Network Computing Editor
