OpenFlow Test Deployment Options
OpenFlow can be introduced into production networks in several ways. I’ll look at two options for deploying OpenFlow that network engineers can use to test the protocol and get their feet wet with SDN.
April 8, 2013
In a perfect world, software-defined networks could be deployed from the ground up, on new hardware, avoiding the need to support legacy architectures and protocols. However, that's unrealistic. In most cases, enterprises will experiment by running hybrid networks, with traditional switching and routing mechanisms operating alongside an SDN environment.
As network engineers digest the concepts of software-defined networking, the next step is to get hands-on with SDN technologies and protocols. One of the core protocols associated with SDN is OpenFlow, which programs hardware and virtual switches. This article focuses on OpenFlow because most networking vendors support it today, either in beta firmware or in production products.
Early adopters of OpenFlow applications need blueprints for integrating OpenFlow hardware into existing networks, which allows for preliminary use case testing and adoption. There are a number of ways to integrate pockets of OpenFlow into native networks using path isolation. Note that even where the hardware forwarding tables are logically partitioned (VLANs, virtual contexts, logical switches), the OpenFlow flow table(s) and the native Ethernet forwarding engine still share the same silicon.
OpenFlow is a flexible mechanism. Once traffic is ingested and classified, it can be forwarded on top of the native network using various encapsulation methods such as VXLAN, GRE or MPLS Label Switched Paths (LSPs); via VLANs; or simply routed along the native Interior Gateway Protocol (IGP) paths.
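As a concrete example of the classify-then-transport pattern, here's a minimal sketch using the open source Ryu controller and OpenFlow 1.0. It's one possible implementation, not the only one; the port numbers (1 as the edge port, 2 as the uplink) and transport VLAN 100 are assumptions chosen purely for illustration.

```python
# Hypothetical Ryu app: classify IPv4 traffic arriving on an edge port
# and carry it across the native network inside a transport VLAN.
# Port numbers and the VLAN ID are illustrative assumptions.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_0


class VlanTransport(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_0.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Classify: IPv4 frames (EtherType 0x0800) ingressing on port 1.
        match = parser.OFPMatch(in_port=1, dl_type=0x0800)
        # Transport: tag with VLAN 100 and send out uplink port 2.
        actions = [parser.OFPActionVlanVid(100),
                   parser.OFPActionOutput(2)]
        dp.send_msg(parser.OFPFlowMod(
            datapath=dp, match=match, cookie=0,
            command=ofp.OFPFC_ADD, priority=200, actions=actions))
```

The same match could just as easily hand the traffic to a GRE or VXLAN tunnel port on switches that expose one; the flow rule only makes the classification decision, and the transport is whatever the native network offers.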
SDN islands need to be integrated into the native network, whether to provide a default drain from those islands into the native network or to stitch disparate SDN islands together. Most SDN products have some form of gateway device to facilitate exits from the OpenFlow forwarding domain. Vendors also support interactions between the OpenFlow forwarding pipeline and the normal L2/L3 forwarding pipeline that exists today.
I’ll look at two integration strategies to bring OpenFlow into a native network. The first, SDN gateways, hands traffic between the OpenFlow forwarding domain and the native network over routed interfaces. The second, the hybrid pipeline, uses the “normal” reserved port defined by the OpenFlow specification to integrate the OpenFlow forwarding pipeline into the normal Ethernet forwarding pipeline.
Two Example SDN Integrations
1. SDN gateway: The OpenFlow and native forwarding pipelines can be logically isolated from one another. Early vendor implementations are separated based on VLAN ID, along with isolation through contexts and logical switches. The gateway can be as simple as a routed interface on the native network that acts as a default gateway to drain L3 lookups. The SDN gateways could be the same interface(s) that advertise the network prefix into the IGP and function as a default gateway, and they can be paired with protocols such as Virtual Router Redundancy Protocol (VRRP) for high availability.
2. Hybrid pipeline: Some vendors also support a blend of the OpenFlow and native pipelines. OpenFlow “normal” is a reserved port in the specification that can be used as an output interface as the result of a match + action operation in version 1.0 of OpenFlow (a minimal sketch of this default follows the list).
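Here's a minimal sketch of that default behavior, again assuming the Ryu controller and OpenFlow 1.0: a wildcard rule at the lowest priority hands anything the OpenFlow table doesn't claim over to the switch's native L2/L3 pipeline.

```python
# Hypothetical Ryu app: install a catch-all rule at priority 0 that
# punts unmatched traffic to the reserved NORMAL (native) pipeline.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_0


class HybridDefault(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_0.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Empty match = wildcard all fields; priority 0 loses to any
        # proactive OpenFlow rule installed at a higher priority.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofp.OFPP_NORMAL)]
        dp.send_msg(parser.OFPFlowMod(
            datapath=dp, match=match, cookie=0,
            command=ofp.OFPFC_ADD, priority=0, actions=actions))
```

Any higher-priority rule (such as the VLAN classification sketch above) overrides this default, which is exactly the interaction the next section walks through.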
OpenFlow Gateway Native Integration
The OFP_Normal configuration sends a packet from the OpenFlow pipeline to the native switching pipeline for forwarding. OFP_Normal is used only as a default forwarding mechanism in this case, much like a default route in routing. The proactive rules placed at a higher priority in the following illustration would be matched before the normal L2/L3 pipeline. Priorities allow for application flow rules such as custom forwarding, security use cases, network taps, or any other function that can be performed using L1-L4 headers.
[Figure: Proactive Rules]
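To make the priority interaction concrete, here is a small pure-Python simulation; the table contents and action names are hypothetical, not drawn from any vendor implementation. A switch evaluates the highest-priority matching rule first, so the proactive entries win and only leftover traffic falls through to NORMAL.

```python
# Hypothetical flow table: (priority, match predicate, action) triples.
flow_table = [
    (300, lambda pkt: pkt.get("tp_dst") == 80, "redirect-to-tap"),
    (200, lambda pkt: pkt.get("nw_dst", "").startswith("10.1."), "custom-path"),
    (0,   lambda pkt: True, "NORMAL"),  # default: native L2/L3 pipeline
]

def lookup(pkt):
    # Highest priority wins, mirroring OpenFlow table semantics.
    for priority, matches, action in sorted(flow_table, key=lambda r: r[0],
                                            reverse=True):
        if matches(pkt):
            return action

print(lookup({"nw_dst": "10.1.2.3", "tp_dst": 80}))    # redirect-to-tap
print(lookup({"nw_dst": "192.168.5.9", "tp_dst": 22})) # NORMAL
```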
Both solutions have their pros and cons. OFP_Normal allows a bit more forwarding flexibility because the provider edge and L3 edge retain full visibility of the network's topology, whereas SDN gateways separate the topologies over physical links.
SDN gateways have a much more traditional look and feel: a flat OpenFlow network with a default gateway for client traffic, reached either through a controller proxy or directly by the client host. Note that both approaches suffer from lack of adoption and maturity, which adds risk to any early production SDN deployment.
Performance is a concern with hybrid deployments. One reason is that TCAM operates at fairly slow speeds; going from 1Gb to 10Gb line rates forces some vendors to do parallel TCAM lookups, which reduces flow table rule capacity. Another performance concern is reactive flow policy, in which packets must be sent to the controller for forwarding instructions, adding latency. In a subsequent tutorial, I'll show how flow rules can be preinstalled, eliminating the need for the OpenFlow switch to send a packet-in to the controller for instructions.
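For contrast, here's a sketch of the reactive pattern, loosely modeled on Ryu's simple_switch sample application for OpenFlow 1.0; the flood action and 60-second idle timeout are placeholder policy, not a recommendation. The first packet of a flow pays the round trip to the controller; the FlowMod then caches the decision in the switch so subsequent packets never leave hardware.

```python
# Hypothetical reactive Ryu app: handle packet-in, then install a flow
# so later packets in the flow avoid the controller round trip.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_0


class ReactiveSketch(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_0.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Placeholder forwarding decision: flood.
        actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]

        # Cache the decision in the switch; idle_timeout ages it out.
        dp.send_msg(parser.OFPFlowMod(
            datapath=dp, match=parser.OFPMatch(in_port=msg.in_port),
            cookie=0, command=ofp.OFPFC_ADD, idle_timeout=60,
            priority=100, actions=actions))

        # Release the packet that triggered the packet-in.
        data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
        dp.send_msg(parser.OFPPacketOut(
            datapath=dp, buffer_id=msg.buffer_id,
            in_port=msg.in_port, actions=actions, data=data))
```

Preinstalling rules at switch connect time, as in the earlier sketches, avoids this round trip entirely, which is the approach the follow-up tutorial will take.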
SDN does not represent the end of networking as we know it. More than ever, talented operators, engineers and architects will be required to shape the future of networking. While OpenFlow is just one piece of the SDN puzzle, it is one of the few paths with momentum that may decouple monolithic network elements by providing a forwarding abstraction between the OS and the hardware. There's considerable vendor support for OpenFlow and a variety of open source projects around OpenFlow-based controllers, which means now is a good time to start experimenting with the protocol and SDN.
Brent Salisbury, CCIE#11972, is a network architect at a state university. You can follow him on Twitter at @networkstatic.