Intelligent network automation and an API-first approach are what will enable NFV to make good on its many promises.
Network Function Virtualization (NFV) became one of the hottest topics in networking a few years ago, promising dynamic virtual networks and cost savings, and most major network operators investigated and trialed it in their labs. However, it's no secret that NFV has yet to deliver the explosive growth and benefits the industry expected. For NFV to succeed at scale, vendor solutions, and the individual components within those solutions, need far greater interoperability.
The NFV problem was never a shortage of solutions. It was too many solutions, each operating independently on a specific component of the NFV environment with no interoperability. Operators bought into the vendor hype and tested multiple tools from multiple vendors, each vendor touting its own as the solution. The reality is that, in its current state, an NFV solution comprises multiple tools in order to cover all the use cases a network operator requires.
To support these claims, it's important to step back and explore the problems currently facing NFV and how it got to this point. NFV has been slow to deliver on its promises, and the primary reason is the lack of interoperability between the multitude of tools that have been introduced. The original vision called for a single NFV Orchestrator serving as the integration point for existing OSS systems, with a single VNF Manager and a single Virtual Infrastructure Manager (VIM) interacting with that orchestrator. Instead, specific NFV use cases such as SD-WAN and Virtual CPE (vCPE) have introduced multiple orchestrators, multiple VNF Managers, and often a hybrid cloud model, resulting in a hodgepodge of management tools throughout the stack.
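The single-stack vision described above, one NFV Orchestrator delegating to one VNF Manager, which in turn draws resources from one VIM, can be sketched in a few lines. This is purely illustrative: every class, method, and resource figure below is a hypothetical stand-in, not any vendor's or standards body's actual API.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the intended NFV management split.
# All names and numbers here are hypothetical, not a real API.

@dataclass
class Vim:
    """Virtual Infrastructure Manager: owns compute/storage/network resources."""
    name: str

    def allocate(self, cpu: int, mem_gb: int) -> dict:
        return {"vim": self.name, "cpu": cpu, "mem_gb": mem_gb}

@dataclass
class VnfManager:
    """VNF Manager: handles VNF lifecycle, asking the VIM for resources."""
    vim: Vim

    def instantiate(self, vnf_type: str) -> dict:
        resources = self.vim.allocate(cpu=2, mem_gb=4)  # arbitrary example sizing
        return {"vnf": vnf_type, "resources": resources}

@dataclass
class NfvOrchestrator:
    """Single integration point for OSS: composes services from VNFs."""
    vnfm: VnfManager
    deployed: list = field(default_factory=list)

    def deploy_service(self, vnf_types: list) -> list:
        self.deployed = [self.vnfm.instantiate(t) for t in vnf_types]
        return self.deployed

# One orchestrator, one VNFM, one VIM -- the original vision.
nfvo = NfvOrchestrator(VnfManager(Vim("openstack-site-1")))
service = nfvo.deploy_service(["virtual-router", "virtual-firewall"])
```

The point of the sketch is the shape, not the detail: OSS talks to one orchestrator, and every request flows down a single, predictable chain. The hodgepodge described above replaces each of these single boxes with several competing ones.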
NFV's challenges really began at the bottom of the stack, with infrastructure. In the early days, the primary environment touted for NFV use was OpenStack. Unfortunately, many companies had already invested heavily in VMware infrastructure, which complicated early NFV trials: those environments were built for IT workloads such as email servers and other legacy systems, not for network functions. Given the sunk investment in VMware and its weaker NFV suitability compared to OpenStack, hybrid environments sprang up that ran both, each bringing its own management system. The introduction of container-based infrastructure added still more management tools, and once the public and private cloud options of AWS, Azure, and Google are factored in, the sheer volume of management tools becomes impossible to operate efficiently.
Issues also arose at the VNF Management layer. The VNF Manager orchestrates the lifecycle of virtual network functions (VNFs) such as virtual routers, switches, and load balancers. For every vendor that introduced VNFs into the market, at least one VNF Manager was introduced as well, either as a new tool or as a modified version of an existing management system used for physical network elements. On top of that, the orchestration layer containing the NFV Orchestrators also saw many options from many vendors, sometimes packaged with the VNF Managers and sometimes supplied as standalone orchestrators by yet another vendor.
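The interoperability gap this creates is easy to see in miniature: two VNF Managers that do the same job but expose it through incompatible interfaces force every orchestrator above them to special-case each one. The sketch below uses two invented vendor classes (not real products) and shows the kind of thin, API-first adapter the article's opening sentence argues for.

```python
# Hypothetical stand-ins for two vendors' VNF Managers. Each performs
# the same lifecycle operation through a different, incompatible interface.

class VendorAVnfm:
    def spin_up(self, vnf_name: str) -> str:
        return f"A:{vnf_name}:running"

class VendorBVnfm:
    def create_instance(self, descriptor: dict) -> dict:
        return {"vnf": descriptor["name"], "state": "ACTIVE"}

# A thin adapter gives the orchestrator one verb to call, regardless of
# which vendor's VNFM sits underneath. This is the API-first idea in
# miniature: normalize at the boundary instead of special-casing above it.
class UnifiedVnfm:
    def __init__(self, backend):
        self.backend = backend

    def instantiate(self, name: str) -> str:
        if isinstance(self.backend, VendorAVnfm):
            return self.backend.spin_up(name)
        result = self.backend.create_instance({"name": name})
        return f"B:{result['vnf']}:{result['state'].lower()}"

statuses = [UnifiedVnfm(b).instantiate("virtual-load-balancer")
            for b in (VendorAVnfm(), VendorBVnfm())]
```

Without such a normalizing layer, every new vendor VNFM multiplies the integration work in every orchestrator that must drive it.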