SDNCon happened this week in Wellington. In reality it was less a conference and more a sprint -- other than an introduction there were basically no presentations, and participants were expected to bring their own projects to work on. It appears actually to have been intended as a Vandervecken sprint (as an email said shortly before SDNCon started, "these 3 days expect you to come along with ideas for how to extend the Mininet/Vandervecken framework as well as the ability to code the changes"). Unfortunately Vandervecken is developed Cathedral style, with the only public releases being 1.3GB ISO images of a Live CD, with minimal documentation, no source release or source repository, and not even any changelogs of what is different between each 1.3GB ISO file. It is definitely not the GitHub workflow of modern Open Source projects.

Rather than spending several days trying to reverse engineer Vandervecken, I decided to implement something that would actually be of use to me -- a way to use Docker to test Ryu and Open vSwitch, so as to have more flexibility of controller and guest environment than Mininet, as well as potentially using something like fig to start the set of containers.
By default, Docker's networking will configure a traditional Linux bridge called docker0, with an RFC 1918 IP range (typically 172.17.42.1/16, if that is not already in use), and attach each container that is started to that bridge via a veth link, giving it a unique IP address in that range. (Chris Swan's short talk on Docker Networking is a helpful overview of how this works.) There are some other options, including exposing the host's networking directly, and configuring a container to share another container's network connection.
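This default plumbing is easy to see from the Docker host. For instance (a quick sketch; brctl comes from the bridge-utils package, which may need installing first):

ip addr show docker0     # the bridge IP, typically 172.17.42.1/16
brctl show docker0       # one vethXXXX interface per running container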
For more complicated scenarios, currently your best option is to tell Docker not to set up the network, and set it up yourself separately. There are several third party scripts to handle setting up separate networking, including pipework (which does support Open vSwitch) and ovswork.sh. Marek Goldmann has a blog post describing connecting containers on multiple hosts via Open vSwitch, which was useful background information (see also Chris Swan's example of connecting multiple hosts using VXLAN, which does not use Open vSwitch).
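These scripts all rely on essentially the same underlying trick: start the container with no networking, then expose its network namespace to the ip netns tooling so that host-created interfaces can be moved into it. A minimal sketch of that trick (the sleep command is just an illustrative way to keep the container alive):

# Start a container with no Docker-managed networking
CID=$(docker run -d --net=none ubuntu:14.04 sleep 3600)
# Find the container's PID, and expose its network namespace to "ip netns"
PID=$(docker inspect --format '{{ .State.Pid }}' "${CID}")
sudo mkdir -p /var/run/netns
sudo ln -s "/proc/${PID}/ns/net" "/var/run/netns/${CID}"
# Interfaces can now be moved in with "ip link set ... netns ${CID}"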
Open vSwitch on Ubuntu 14.04 LTS
Using Open vSwitch on a Docker host (eg, the one I set up myself under VMware Fusion) is pretty simple, because sufficient Open vSwitch support has been included in the Linux kernel and distributions since before there was usable container support for Docker (ie, basically since the Linux 3.8 kernel). I used Ubuntu 14.04 LTS, which comes with the Linux 3.13 kernel, since I already had it handy. To easily use Open vSwitch, you need two tools:
- ovs-vsctl (packaged in openvswitch-switch in Ubuntu Linux)
- ovs-ofctl (packaged in openvswitch-common in Ubuntu Linux)
so these tools can be installed with:
sudo apt-get install openvswitch-common openvswitch-switch
Having done that, ovs-vsctl can be used to set up an Open vSwitch bridge, and ovs-ofctl can be used to inspect OpenFlow information about the bridge.
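For example (the bridge name br0 here is arbitrary):

sudo ovs-vsctl add-br br0        # create an Open vSwitch bridge
sudo ovs-vsctl show              # overall Open vSwitch configuration
sudo ovs-ofctl show br0          # OpenFlow features and port numbers
sudo ovs-ofctl dump-flows br0    # current OpenFlow flow table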
Ryu under Docker
Running Ryu is fairly simple because it effectively just exposes a single TCP port (historically TCP/6633, the de facto OpenFlow port; IANA assigned TCP/6653 last year). There is a semi-official Ryu Dockerfile, but inspecting the Dockerfile reveals that it basically installs by downloading and unzipping a master.zip of unknown origin (and unknown version).
To allow more flexibility I made my own simple Docker image, partly following the package-based Ryu process I described previously, and partly using the pip install approach. To maximise the amount of Docker caching, I followed the two-Dockerfiles approach of containerising Python applications. The first creates a base image with the application installed, and the second actually uses it -- building on the cached base.
The first Dockerfile installs all the dependencies available as Ubuntu packages, then copies in a pip requirements file for Ryu, and runs:
pip install -r /root/requirements.txt
to install the remaining dependencies, and Ryu itself. (In this case Ryu 3.12, since that was the last version I'd built.) The requirements list was taken from my existing installation with "pip freeze", then filtered down to the known Ryu dependencies.
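For illustration, the base Dockerfile amounts to something like the sketch below (the Ubuntu package list shown is an illustrative subset, not the exact one):

FROM ubuntu:14.04
# Dependencies available as Ubuntu packages (illustrative subset)
RUN apt-get update && \
    apt-get install -y python python-pip python-eventlet python-netaddr && \
    apt-get clean
# Remaining dependencies, and Ryu 3.12 itself, installed via pip
ADD requirements.txt /root/requirements.txt
RUN pip install -r /root/requirements.txt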
The sole catch with this approach of using pip on top of a minimal Ubuntu Docker image is that it has no compiler, so the python-msgpack package has to fall back to using the less efficient Python implementation rather than the more efficient C implementation. For now that is okay, but eventually I'll probably take the packages I built previously and make them part of the build process (ideally reproducing the build process inside a Docker container).
That base Ryu 3.12 image can be built with:
cd docker/ryubase
docker build -t ryu312 .
(in this case the name was chosen to reflect the Ryu version).
On top of that we can build a simple Ryu application image for the KiwiPycon example with a second Dockerfile that builds on the first one and adds kiwipycon3.py into the container.
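That second Dockerfile is essentially just the following (the CMD line is a sketch of the obvious approach, using Ryu's standard ryu-manager launcher):

FROM ryu312
# Add the example Ryu application, and run it by default
ADD kiwipycon3.py /root/kiwipycon3.py
EXPOSE 6633 6653
CMD ["ryu-manager", "/root/kiwipycon3.py"]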
The application image can be built with:
cd docker/ryu-kiwipycon
docker build -t ryu-kiwipycon .
Once built, that container can be started with:
docker run -i -t -p 6633:6633 -p 6653:6653 ryu-kiwipycon
or to get into a shell in that container:
docker run -i -t -p 6633:6633 -p 6653:6653 ryu-kiwipycon /bin/bash
Example guest image
My Kiwi Pycon Ryu example needed dig in order to trigger some behaviour of the controller. To facilitate that, I built one more Docker image which was a tiny customisation of the base Ubuntu 14.04 one, using a trivial Dockerfile, which can be built with:
cd docker/kiwipycon-guest
docker build -t kiwipycon-guest .
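For reference, that trivial Dockerfile boils down to little more than this (dig is shipped in the dnsutils package on Ubuntu):

FROM ubuntu:14.04
# Add dig, needed to trigger the controller behaviour
RUN apt-get update && apt-get install -y dnsutils && apt-get clean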
Connecting a Docker container to Open vSwitch
To facilitate plumbing a Docker container into an Open vSwitch, I wrote a short dockerovs shell script (inspired by pipework and ovswork.sh, but with a little more robustness and auto-setup -- eg, it will automatically create the Open vSwitch bridge if it does not already exist).
Basic usage is:
GUEST_ID=$(docker run -d -i -t --net=none kiwipycon-guest /bin/bash)
./dockerovs kiwipycon "${GUEST_ID}" "172.31.1.1/24"
where the parameters to dockerovs are the name of the Open vSwitch bridge (auto-created if it does not exist), the Docker container ID (used to auto-find the network namespace), and the IP address to be assigned inside the container.
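Internally, dockerovs performs roughly the following steps (a simplified sketch, assuming the container's network namespace has been exposed to ip netns as described earlier; the veth names are illustrative, and error handling is omitted):

sudo ovs-vsctl --may-exist add-br kiwipycon
sudo ovs-vsctl set-controller kiwipycon tcp:127.0.0.1:6633
sudo ip link add veth_host type veth peer name veth_guest
sudo ovs-vsctl add-port kiwipycon veth_host
sudo ip link set veth_host up
sudo ip link set veth_guest netns "${GUEST_ID}"
sudo ip netns exec "${GUEST_ID}" ip link set veth_guest name eth0
sudo ip netns exec "${GUEST_ID}" ip addr add 172.31.1.1/24 dev eth0
sudo ip netns exec "${GUEST_ID}" ip link set eth0 up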
A "complete" KiwiPycon example can be brought up with a companion
helper script,
kiwipycon-example
,
which auto-starts the Open vSwitch, auto-starts two guests (with
172.31.1.1/24 and 172.31.1.2/24) connected to that Open vSwitch,
and then starts the Ryu controller with the example
application.
To run this:
./kiwipycon-example
(This actually needs to be run within the Docker VM, rather than via the Docker command line, because Docker currently does not provide a hook for running custom networking setup on container startup.)
It will start the two guests (kiwipycon_h1 and kiwipycon_h2) detached in the background, and the Ryu controller in the foreground (to detach from it use ctrl-p ctrl-q).
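For reference, the script amounts to roughly the following (a simplified sketch; the container names match the real ones, but the exact flags are best taken from the script itself):

sudo ovs-vsctl --if-exists del-br kiwipycon    # fresh bridge, so ports 1 and 2
H1=$(docker run -d -i -t --net=none --name kiwipycon_h1 kiwipycon-guest /bin/bash)
./dockerovs kiwipycon "${H1}" "172.31.1.1/24"
H2=$(docker run -d -i -t --net=none --name kiwipycon_h2 kiwipycon-guest /bin/bash)
./dockerovs kiwipycon "${H2}" "172.31.1.2/24"
docker run -i -t -p 6633:6633 -p 6653:6653 ryu-kiwipycon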
Once all three containers are running (the Ryu application as well as the two test hosts -- check with docker ps) it is possible to carry out the same test as in the KiwiPycon example:
docker attach kiwipycon_h1
ping -c 5 -W 1 172.31.1.2 # Observe timeouts
dig @172.31.1.2 +time=1 +tries=1 +short xyzzy.example.com
ping -c 5 -W 1 172.31.1.2 # Observe it now works
From the Docker host, it is possible to inspect the status of the Open vSwitch:
ewen@docker:~$ sudo ovs-vsctl list-br
kiwipycon
ewen@docker:~$ sudo ovs-vsctl get-controller kiwipycon
ptcp:6634
tcp:127.0.0.1:6633
ewen@docker:~$ sudo ovs-ofctl dump-ports kiwipycon
OFPST_PORT reply (xid=0x2): 3 ports
port 1: rx pkts=24, bytes=1982, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=11, bytes=798, drop=0, errs=0, coll=0
port 2: rx pkts=17, bytes=1306, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=9, bytes=658, drop=0, errs=0, coll=0
port LOCAL: rx pkts=8, bytes=648, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=20, bytes=1456, drop=0, errs=0, coll=0
ewen@docker:~$ sudo ovs-ofctl dump-flows kiwipycon
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=1160.309s, table=0, n_packets=6, n_bytes=252, idle_age=452, priority=30,arp actions=FLOOD
cookie=0x0, duration=1160.31s, table=0, n_packets=10, n_bytes=908, idle_age=618, priority=20,in_port=1 actions=drop
cookie=0x0, duration=1160.309s, table=0, n_packets=7, n_bytes=630, idle_age=453, priority=20,in_port=2 actions=FLOOD
cookie=0x0, duration=1160.309s, table=0, n_packets=1, n_bytes=88, idle_age=466, priority=30,udp,in_port=1 actions=CONTROLLER:65535
cookie=0x0, duration=466.28s, table=0, n_packets=7, n_bytes=574, idle_age=452, priority=40,dl_src=3a:1e:56:96:4f:fc actions=FLOOD
ewen@docker:~$
(Those flows were captured after running the tests above.)
The only special case for this particular example is that Open vSwitch appears to only allocate port 1 and port 2 on the bridge immediately after it is first started up (eg, if the guest containers are restarted they will get higher port numbers), and the Ryu Kiwi Pycon example has a simplifying assumption that the two test hosts will be on port 1 and port 2 respectively -- so the kiwipycon-example script deletes the bridge before each run.
Apart from that, and a bit of scripting effort, using Open vSwitch with Docker "just works". The only thing which would make it easier is if Docker had a way to run a "network setup" script automatically on the host immediately after guest startup, rather than only having built-in network defaults.
For clarity, in this example the dockerovs script configures the Open vSwitch to connect to a controller on 127.0.0.1:6633, and the kiwipycon-example script runs the Ryu application with port 6633 forwarded from the host into the Ryu application container. This puts the Open vSwitch (logically located in the host) under the control of the Ryu application in the container.
It is also possible to use Open vSwitch without a controller, in NORMAL mode -- ie, acting like a traditional learning switch (see the sketch below). But with the Kiwi Pycon example, the key point of the setup is using Ryu to customise how the Open vSwitch manages traffic; this is explained in more detail in the Ryu Kiwi Pycon presentation.
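As a minimal sketch (reusing the kiwipycon bridge from above), NORMAL mode looks like:

sudo ovs-vsctl del-controller kiwipycon
sudo ovs-ofctl add-flow kiwipycon "priority=0,actions=NORMAL"

(With no controller configured, and the default standalone fail mode, Open vSwitch will fall back to this traditional switching behaviour by itself anyway.)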