Traditional IT And The Road To DevOps

Gartner’s Bimodal IT paradigm for CIOs puts forth the notion of two modes of IT operations. One practices DevOps and agile development and uses cloud infrastructure, serving as an innovation rocket engine that provides competitive oomph from the newest and best technology. The other focuses on quality, waterfall processes, efficiency, and productivity to pull along legacy yet still-critical applications and infrastructure. Can traditional IT realistically change its pace and learn new tricks? The experience of a major telecom operator offers some interesting lessons.

Telecom operators are about as traditional IT as it gets. They have the ultimate in legacy applications, their brownfield infrastructure spans decades of deployments, and they have a significant mix of physical and virtual servers alongside tons of hard-to-manage networking gear. If a telecom operator can significantly change how it operates, then just about anyone can.

Mission critical

This operator, headquartered on the East Coast of the U.S., offers many services, but we’ll zoom in on a team that deals with consumer-oriented voice and data services. This IT team’s responsibility is to support and perform the evaluation and certification of changes made to nationwide services used by millions of customers. In other words, it’s mission critical. Quality is a key performance indicator, while getting new features to market is important to competitiveness. The team not only has to perform its own testing, but also support internal software development teams in multiple locations across the country, as well as external contractors and vendors located around the world.

Before the team started its modernization journey, its modus operandi was like that of a lot of IT teams: highly manual and time-consuming. When the billing department requested the setup of a production-like replica of the service network, it could take weeks to put together. These requests came in asynchronously from multiple teams. Oftentimes, vendors and contractors would have to fly to the U.S. from foreign countries to test their software on these laboriously arranged infrastructure environments.

The lag times in arranging all of this and the difficulty of providing infrastructure access meant that, in true waterfall fashion, teams had to cluster their certification efforts into a long “integration hell” period at the end of an overall release cycle. Worse, since the work was done manually, nothing was standardized or consistent, which in turn made it very difficult to produce predictable outcomes.

Shifting to DevOps

The team had a vision, though. These IT pros knew that they couldn’t keep going on like this, so they decided to take a journey to a DevOps destination. However, they faced major obstacles on that road. The first was the unwieldy nature of their infrastructure. Performing relevant certification of one component of the service (such as the billing server) required putting together a replica of the entire service network, which comprised dozens of servers, VMs, network switches, and hardware appliances.

So, the team's first step was building an infrastructure-as-a-service layer that could offer self-service access to complete infrastructure environments. Given the legacy, bare-metal, and networking pieces in play, this project took about five months to accomplish.
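To make the self-service idea concrete, here’s a minimal sketch in Python of what an environment request might look like. Everything here is hypothetical: the Blueprint structure, the provision_environment function, and the component names stand in for whatever orchestration layer the team actually built, which the article doesn’t describe.

    # Hypothetical sketch only; none of these names come from the operator's
    # actual tooling. A blueprint declares every piece of the replica service
    # network, and a single self-service call kicks off provisioning.
    from dataclasses import dataclass, field

    @dataclass
    class Blueprint:
        """Declarative description of a production-like replica environment."""
        name: str
        bare_metal: list = field(default_factory=list)  # legacy physical servers
        vms: list = field(default_factory=list)         # virtualized components
        switches: list = field(default_factory=list)    # network gear to configure
        appliances: list = field(default_factory=list)  # hardware appliances

    def provision_environment(blueprint: Blueprint, requester: str) -> dict:
        """Stand-in for the IaaS call that assembles a replica on demand.

        A real implementation would reserve hardware from a pool, clone VMs,
        and push switch configurations; this stub just echoes the request.
        """
        component_count = (len(blueprint.bare_metal) + len(blueprint.vms)
                           + len(blueprint.switches) + len(blueprint.appliances))
        return {"environment": blueprint.name,
                "requested_by": requester,
                "components": component_count,
                "status": "provisioning"}

    # A billing-team request that once took weeks becomes a single call.
    billing_replica = Blueprint(
        name="consumer-voice-data-replica",
        bare_metal=["billing-server", "mediation-server"],
        vms=["sip-proxy", "subscriber-db"],
        switches=["core-sw-1", "edge-sw-1"],
        appliances=["session-border-controller"],
    )
    print(provision_environment(billing_replica, requester="billing-team"))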

In parallel with the infrastructure automation work, the IT group started capturing different teams’ key certification tests in automation and building out a master certification routine. As this suite expanded, every new code update from any team was run through all of the tests accumulated so far, so quality increased cumulatively. It took about four months to release the first iteration of this certification routine. Then the team turned the certification into a self-service offering, available to launch on top of the self-service infrastructure environments that the tests were developed against.
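As an illustration of how such a routine accumulates quality (a sketch under assumed names, not the team’s actual tooling), a master certification can be as simple as a registry that collects each team’s automated checks and runs all of them against a provisioned environment, so every test added benefits every subsequent run:

    # Hypothetical sketch of a cumulative master certification runner.
    # The decorator, check names, and environment dict are invented for
    # illustration; each team registers its checks once, and every future
    # run executes the entire accumulated suite.
    CERTIFICATION_SUITE = []

    def certification_test(func):
        """Register a team's check in the master suite."""
        CERTIFICATION_SUITE.append(func)
        return func

    @certification_test
    def billing_rates_applied(env):
        # Assumed check: a sample call gets rated correctly.
        return env.get("billing") == "ok"

    @certification_test
    def call_setup_succeeds(env):
        # Assumed check: signaling completes end to end.
        return env.get("signaling") == "ok"

    def run_certification(env: dict) -> bool:
        """Run every registered check; the suite only grows over time."""
        results = {check.__name__: check(env) for check in CERTIFICATION_SUITE}
        for name, passed in results.items():
            print(f"{name}: {'PASS' if passed else 'FAIL'}")
        return all(results.values())

    # Run the whole accumulated suite against a (stubbed) environment.
    run_certification({"billing": "ok", "signaling": "ok"})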

The outcome

What results did this telecom operator team see? I’ll start with the automated certification routine, because it became usable first. The first time the team ran the master certification, it achieved 33% greater test coverage in a fraction of the time compared to manual testing. However, the shift to a self-service model for infrastructure and certification made a bigger difference. In fact, it fundamentally changed business operations for all teams concerned.

The team shifted from a way of doing business constrained by location, time of day, day of week, and time zone to a 24/7 global service model. Any team could access everything it needed to certify software updates without lag time, which encouraged more incremental testing. Infrastructure and certification services were available around the clock from anywhere in the world, which erased the obstacles to effective technical collaboration with contractors and vendors.

Gone were the costly and time-consuming visits from foreign countries, and full access meant that these external teams could be required to certify beta code before releasing new functionality, rather than serving up unproven code when a release cycle fell behind schedule. The result was much smoother software integration, fewer blocking bugs, and a 20% reduction in the overall certification cycle for new releases of functionality into the service within twelve months.

Since the IT team was no longer in the manual assembly business, it was able to repurpose 15% of a 40-person team to innovation initiatives rather than lights-on monkey work. As a bit of gravy, the increased efficiency meant the team immediately doubled its effective infrastructure utilization, which curbed capital spending on new equipment even as productivity increased.

More work ahead

What’s particularly compelling about this story is that this team of infrastructure pros is nowhere near the end of their journey. They aren’t running continuous integration cycles, and they aren’t doing automated deployments. They wouldn’t say that they are yet practicing DevOps. In fact, they are definitely still in a waterfall process, and they are still primarily using their legacy application lifecycle management (ALM) tools rather than a DevOps toolchain.

Yet the increase in automation enabled a more agile, incremental waterfall cycle. Standardization and self-service enabled greater collaboration. And the table is set for progress toward goals like continuous integration.

This is a story of modernizing traditional IT as a journey. There is no “beam me up, Scotty” for this team, but taking concrete steps has made a huge difference nonetheless, putting it that much closer to its DevOps goals.