4 NFV Myths Debunked
Network functions virtualization has spawned a number of misconceptions about what it is and how it works. Here's a reality check.
January 29, 2015
Remember the good old days, when transformative networking concepts only came around every five to ten years? Now, even before the marketplace has come to grips with software-defined networking (SDN), along comes network functions virtualization (NFV). As with SDN, many vendors claim to have a standards-based, open NFV solution ready to go and render current networks obsolete.
While the industry certainly made great strides last year, NFV is still quite nascent, and something so new and encompassing inevitably generates myths and misconceptions. Here are four noteworthy ones:
1. A virtualized network function (VNF) is simply a software-based replica of a hardware-based network element (NE).
Many vendors have, in fact, taken this approach, at least as an initial step, and declared they have NFV. Yay! But taking an NE’s monolithic software and packaging it as a virtual machine (VM) image to run on a generic server is not sufficient to deliver the agility and efficiency that NFV promises. Most NEs comprise multiple component functions, each of which may need to scale differently depending on traffic load and the number of users and sessions. With a single VM, everything has to scale together, even if only one component has run out of gas.
Fortunately, NFV has the concept of the VNF component (VNFC), so VNFs explicitly architected for the new paradigm can be disaggregated, with each VNFC in its own VM that can be scaled up/down or out/in independently as required.
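The scaling argument can be made concrete with a back-of-the-envelope sketch. The component names, loads, and per-VM capacity below are hypothetical, not from any ETSI specification; the point is simply that a monolithic VNF must replicate every component at the rate of its busiest one, while disaggregated VNFCs scale independently:

```python
import math

def monolithic_instances(loads, capacity_per_vm=100):
    # A single-VM VNF must scale on its busiest component,
    # dragging every other component along with each replica.
    peak = max(loads.values())
    return math.ceil(peak / capacity_per_vm) * len(loads)

def per_vnfc_instances(loads, capacity_per_vm=100):
    # Disaggregated VNFCs each scale out/in on their own load.
    return sum(math.ceil(load / capacity_per_vm) for load in loads.values())

# Hypothetical per-component load (e.g., sessions handled)
loads = {"control": 80, "data_plane": 450, "logging": 30}

print(monolithic_instances(loads))  # 15 VM-components: 5 replicas x 3 components
print(per_vnfc_instances(loads))    # 7 VMs: 1 + 5 + 1
```

Only the data plane has run out of gas here, yet the monolithic packaging pays to replicate the control and logging functions five times over; the per-VNFC layout pays only for what is actually loaded.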
2. All VNFs will run on server farms in large data centers.
Many functions, particularly those representing NEs currently running in central offices, mobile switching centers, cable hub sites, and content centers, will indeed be located mostly in data centers. Certain functions currently deployed as customer premises equipment (CPE), such as firewalls, will move into data centers as well. Such virtual CPEs (vCPEs) pair centralized VNFs with simple forwarding devices that act as premises-located demarcation points (demarcs). However, many VNFs either must or should run at the demarc. Encryption, WAN optimization, and network performance testing are examples of functions that lose all or most of their value unless performed at the edge.
3. All NEs will get virtualized and run as software in a data center or on-premises.
Well, in this universe, it isn’t possible to create and move photons (or radio signals) in software. The optical transport systems that transmit, propagate, and receive the bits that make up digital communications and content will remain as physical NEs, even as their parameters and behaviors become more software programmable.
Since it is cost-prohibitive to run fiber from every customer premises directly to a data center, these optical NEs will be located at intermediate sites, where it makes sense not only to groom wavelengths but also to aggregate and switch packets. Unless higher-layer functions are also performed in these systems, it may be more cost-effective to handle the packets in merchant silicon rather than in an integrated server.
4. NFV is too complex and immature to implement anytime soon.
Actually, this one is probably true to a degree. While reference architectures have been established, and initial functional requirements have been specified (e.g., by the ETSI NFV group), standards and/or de facto best practices and reference implementations are incomplete. Moreover, management solutions and operational procedures that can deal with network functions that can be dynamically located, interconnected, and moved in service are in their infancy.
However, this doesn’t mean there aren’t applications that are practical today. For example, delivering enterprise managed services based on deploying multiple VNFs on a server, perhaps integrated into the CPE demarc, rather than stacking individual appliances (router, encryption, firewall, or WAN optimization), is a simple yet attractive solution that doesn’t require an operational overhaul.
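That managed-services case boils down to running several functions in sequence on one box instead of cabling appliances together. As a minimal sketch, assuming a hypothetical packet format and made-up function behavior (this is illustrative, not any vendor's vCPE implementation), a service chain is just an ordered list of functions applied in turn:

```python
# Each "VNF" takes a packet (a dict) and returns it transformed,
# or None to drop it. All names and fields are hypothetical.

def route(pkt):
    return {**pkt, "routed": True}

def firewall(pkt):
    if pkt.get("port") == 23:   # e.g., block telnet
        return None
    return pkt

def wan_opt(pkt):
    return {**pkt, "compressed": True}

def encrypt(pkt):
    return {**pkt, "encrypted": True}

# The chain replaces a stack of physical appliances on the CPE.
SERVICE_CHAIN = [route, firewall, wan_opt, encrypt]

def process(pkt):
    for vnf in SERVICE_CHAIN:
        pkt = vnf(pkt)
        if pkt is None:          # a function in the chain dropped it
            return None
    return pkt

print(process({"port": 443, "payload": "data"}))  # routed, compressed, encrypted
print(process({"port": 23, "payload": "x"}))      # None: dropped by the firewall
```

Reordering or swapping functions here means editing a list rather than re-racking hardware, which is the operational agility the managed-services case is after.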
NFV has rightly received great attention, and none too soon. In fact, it makes sense for NFV to grow up in tandem with SDN: one could view NFV as simply extending the software definition of the network up the traditional OSI stack to Layer 7, while leveraging SDN for connectivity and forwarding at the lower layers.
What are your favorite NFV myths? Do you think some of these will be debunked in 2015? Share your thoughts in the comments section below.