The goal of this webinar is to give you hands-on experience of an SDN network based on Open vSwitch (OVS), managed both manually and by an OpenDaylight (ODL) controller. It also discusses quality assessment methodologies and tools, which are necessary to understand how the network performs and to deliver the expected quality.

This text is a summary of the content in the webinar.

Tools and setup

The webinar utilized Netrounds on VMs to validate SDN-based network topologies and infrastructure. For the presentation, all of this was done on a single desktop. VirtualBox was used as the hypervisor platform. Open vSwitch provided the OpenFlow (OF) enabled switches. Mininet was used to quickly launch various custom network topologies. OpenDaylight (ODL) was used as the Network Operating System (NOS), or SDN controller, to interact with the different SDN topologies.

Since Netrounds is deployed as software, it can easily be integrated into an SDN environment in a lab, in a development environment, or in production. Netrounds probes can be delivered pre-installed on hardware, or easily downloaded and installed on bare metal or as virtual machines.

initial-diagram

Three VMs were used. Two had Netrounds probes installed; the third had Mininet and ODL installed. Internal networks in VirtualBox were used to link the two Netrounds probes to the ODL/Mininet VM. The Netrounds probes also had Internet-facing interfaces (using VirtualBox bridged ports). This allows probes to be configured, thresholds to be set, and tests to be launched through the Netrounds cloud portal; all that is required is an Internet connection and a portal account.
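
A sketch of the corresponding VirtualBox configuration (the VM name, host adapter, and internal network name are examples, not the ones used in the webinar) wires a probe VM to both the Internet and the internal test network:

VBoxManage modifyvm "NetroundsProbe1" --nic1 bridged --bridgeadapter1 eth0 --nic2 intnet --intnet2 probe1-ovs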

Test process

The testing was broken out into 3 stages.

stage1-cli

First a single OVS instance was configured by CLI using ovs-vsctl commands to interact with the OVSDB (OVS Database).

Launch an OVS instance named S1

ovs-vsctl add-br S1

Connect the probes by adding the VM's eth1 and eth2 ports to S1

ovs-vsctl add-port S1 eth1
ovs-vsctl add-port S1 eth2

Verify

ovs-vsctl show

Examine the OVSDB, particularly the Interface table

ovs-vsctl list Interface

Remotely, this could be accomplished from an SDN controller using the OVSDB protocol. OpenFlow flow entries were added and removed locally on the CLI using ovs-ofctl commands; this could also be done remotely using the OpenFlow protocol. The flow entries were validated by launching stages of Netrounds tests across the Open vSwitch. Netrounds allows tests to be run in parallel, sequentially, or in a combination of both. Tests were launched with TCP traffic, UDP traffic with multiple DSCP values, and IPv6 traffic for dual-stack testing. Traffic sending rates and custom thresholds for traffic (loss, delay, jitter) were set through the Netrounds cloud portal.
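
As a minimal sketch of the remote OVSDB option (assuming the default OVSDB port 6640), the switch can be told to accept a remote manager; the OpenFlow side is handled later with set-controller:

ovs-vsctl set-manager ptcp:6640    # listen for a remote OVSDB manager on TCP port 6640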

Port 80 traffic was blocked using a simple OF entry and verified in Netrounds.
ovs-ofctl add-flow S1 priority=100,tcp,tp_dst=80,actions=drop
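
The entry can be inspected on the switch and, once the test is complete, removed again (a small sketch using the same bridge name):

ovs-ofctl dump-flows S1                    # confirm the drop rule and its packet counters
ovs-ofctl del-flows S1 "tcp,tp_dst=80"     # remove the blocking entry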

cli-testing

Netrounds was then used to verify QoS with OVS. First, this was accomplished by applying an ingress policing rate and burst size to an OVS port. Then queues were configured with ovs-vsctl commands, and a queuing policy was applied with ovs-ofctl. Through OpenFlow flow entries, DSCP 46 traffic was sent to a guaranteed-rate queue while all other traffic was sent to a best-effort queue.

Ingress policing

ovs-vsctl set Interface eth2 ingress_policing_rate=500
ovs-vsctl set Interface eth2 ingress_policing_burst=50
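
The rate and burst are specified in kbps and kb respectively. As a quick sketch, the applied values can be checked, and policing removed again by setting both back to 0:

ovs-vsctl list interface eth2              # shows ingress_policing_rate and ingress_policing_burst
ovs-vsctl set interface eth2 ingress_policing_rate=0 ingress_policing_burst=0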

ingress-policing

Preferential treatment for DSCP 46

ovs-vsctl set port eth1 qos=@newqos -- \
--id=@newqos create qos type=linux-htb other-config:max-rate=2000000 queues=0=@q0,1=@q1 -- \
--id=@q0 create queue other-config:min-rate=0 -- \
--id=@q1 create queue other-config:min-rate=1000000
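
When the QoS test is done, the QoS and queue records can be detached from the port and deleted again (a sketch; the --all form removes every QoS and Queue record in the database):

ovs-vsctl clear Port eth1 qos
ovs-vsctl --all destroy QoS -- --all destroy Queue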

OpenFlow flow entries to send traffic types to specific port:queue pairs

ovs-ofctl add-flow S1 priority=1000,udp,nw_dst=10.0.0.100,nw_tos=184,actions=enqueue:1:1
ovs-ofctl add-flow S1 priority=950,ip,nw_dst=10.0.0.100,actions=enqueue:1:0
ovs-ofctl add-flow S1 priority=940,ipv6,ipv6_dst=2001::100,actions=enqueue:1:0
ovs-ofctl add-flow S1 priority=900,actions=normal
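
Queue hits can also be checked directly on the switch while Netrounds provides the end-to-end view (a small sketch):

ovs-ofctl queue-stats S1                   # per-queue transmit counters for each port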

Next, Open vSwitch was connected to the ODL controller, running the first ODL release, Hydrogen.

stage2-ODL-app

Set controller

ovs-vsctl set-controller S1 tcp:127.0.0.1:6633
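
A quick sanity check (sketch) confirms that the switch has established an OpenFlow session to the controller:

ovs-vsctl get-controller S1
ovs-vsctl show                             # the controller entry should report is_connected: true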

From the controller, the behavior and characteristics of the network can be configured using SDN applications. Highlighted was the demonstration application “Simple Forwarding”, which builds /32 host paths through the network topology using OpenFlow. Netrounds validated that all traffic, with the exception of IPv6, was working. Further investigation showed that the “Simple Forwarding” application was not adding IPv6 flows. This provided an opportunity to utilize the northbound RESTful API of ODL.

stage3-NBAPI-app

Flow entries for IPv6 were added using the Chrome Postman REST client
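
The same thing can be scripted with curl. The following is only a rough sketch against the Hydrogen flow-programmer northbound API; the URL layout, credentials, and JSON fields are recalled from that release and may need adjusting, and the node ID and output port are example values:

curl -u admin:admin -H "Content-Type: application/json" -X PUT \
  http://127.0.0.1:8080/controller/nb/v2/flowprogrammer/default/node/OF/00:00:00:00:00:00:00:01/staticFlow/ipv6fwd \
  -d '{"name":"ipv6fwd","installInHw":"true","node":{"id":"00:00:00:00:00:00:00:01","type":"OF"},"etherType":"0x86dd","priority":"500","actions":["OUTPUT=2"]}'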

postman

Using a Mininet custom topology, a basic loop of 4 switches was launched.

ODL

Additionally, artificial latency was added to links through the Mininet custom topology, which uses Linux traffic control (tc). The direct link between S1 and S2 was given higher latency than the longer physical path S1 => S4 => S3 => S2. Netrounds probes were connected into the looped topology as needed using simple ovs-vsctl add-port commands. By default, the Simple Forwarding application builds its path over the shorter physical path. Successfully sending traffic through a looped topology (validated by Netrounds) provided an opportunity to discuss one simple example of the benefit of centralized processing in an SDN controller: traffic was successfully sent without any traditional distributed loop-prevention mechanism, such as one of the various flavors of spanning tree protocol. This is easy to accomplish when the controller is topology-aware.
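
A launch along these lines (a sketch; loop_topo.py and the topology name looptopo are hypothetical placeholders for the custom topology file used in the webinar) brings up the loop with tc-based link delays and attaches it to the local ODL controller:

sudo mn --custom loop_topo.py --topo looptopo --link tc --controller remote,ip=127.0.0.1,port=6633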

stp

stp-table

A failover test was conducted by simply shutting down the short path between the 2 OVS switches over which all traffic was flowing. This was accomplished from the Mininet shell. Through the Netrounds web portal, the impact of the failover was clearly observed: a brief spike in jitter and packet loss occurred before traffic failed over to the lower-latency, longer physical path.
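
From the Mininet CLI this amounts to something like the following (a sketch, assuming the switch names s1 and s2):

mininet> link s1 s2 down
mininet> link s1 s2 up                     # restore the topology afterwards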

After restoring the topology, an example of traffic engineering using OpenFlow flow entries was shown. In the looped topology, UDP traffic with a DSCP value of 46 was sent through a lower-latency path with no competing flows, while all other flows were kept on their original path through the network. This was again accomplished through the northbound API of ODL. This demonstrated the power of OpenFlow to easily control traffic types and the paths they take through an OpenFlow-enabled programmable network.
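
On a single switch, the steering rule is conceptually equivalent to a flow entry like the one below (a sketch; output port 3 is an assumed alternate-path port, and in the webinar the change was pushed via the REST API rather than the CLI). nw_tos=184 matches DSCP 46, as in the earlier queueing example:

ovs-ofctl add-flow S1 priority=1000,udp,nw_tos=184,actions=output:3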