Docker background
Docker is a platform for packaging applications into Linux "containers" (hence the shipping metaphor), "shipping" them around, and running them on any system set up with Docker. The underlying technology (ie, Linux containers) has been around for some time, but Docker seems to have the mindshare as a more turn-key solution (for instance it came up several times at Kiwi Pycon 2014). Docker's FAQ has a reasonable summary of what Docker adds to Linux Containers. There is also a Docker Book (US$10 for one eBook format; US$15 for multiple eBook formats), recently released by James Turnbull, which comes highly recommended.
Since I had seen several talks enthusiastic about Docker (eg, Shipping Ruby Apps with Docker from RedDotRubyConf), I wanted to try it out. However, since it is a Linux-based technology and I have OS X as my desktop, some additional steps are required to get a useful environment.
The default solution for Docker on OS X is boot2docker (on GitHub), which bundles VirtualBox and a minimal Docker-compatible Linux environment (based on Tiny Core Linux). However, I already have VMWare Fusion installed, so I did not want to also run VirtualBox alongside it -- and VMWare Fusion apparently performs somewhat better. While boot2docker's ISO image will apparently run on other (virtualisation) platforms, it does not include, eg, the VMWare tools. It might be possible to rebuild boot2docker's ISO image including other things -- like VMWare Fusion's tools -- but that (a) requires a running Docker to do so, and (b) seems like a lot of work just to try something out. (Atomic Host is another small Linux distro aimed at hosting Docker -- not quite as small as Tiny Core Linux, but still fairly small. They recommend a separate partition for your Docker containers.)
There is an unofficial alternative solution called Docker-OSX, which uses Vagrant to run a VM that can host Docker. (That was the solution recommended at Kiwi PyCon 2014.) By default Vagrant for OS X also uses VirtualBox. But Vagrant does have VMWare Fusion integration as a paid addition. However the Vagrant/VMWare Fusion integration is US$80 per seat (and apparently per platform -- VMWare Fusion versus VMWare Workstation), which seems excessive when VMWare Fusion Standard is about US$60, and the Vagrant VMWare Fusion integration does not include the VMWare Fusion license (the minimum cost for VMWare Fusion and the Vagrant integration is about US$130; also there is no trial version). However if money is no object it does offer a way to use Vagrant to make a boot2docker under VMWare Fusion (using mitchellh's vagrant boot2docker config, which reprocesses boot2docker's ISO to be usable by Vagrant -- and does seem to have options for some VMWare-specific config). If the Vagrant/VMWare Fusion integration had been, say, US$20, I probably would have just gone with that option for convenience -- but at US$80 without any way to judge whether it is worth the money, it is more than high enough to have me looking for other solutions.
Since I just wanted something to experiment with, and did not care about excessively minimising resources required (optimise for "time to start using" rather than minimal resources :-) ), I decided to use what I already had to hand -- VMWare Fusion, an Ubuntu Linux 14.04 LTS 64-bit Server ISO image, and a bunch of Linux experience -- and see how far I could get. Docker basically needs a relatively recent Linux kernel (3.8+) for the container support, and then some wrapper tools of its own. There is a reasonable guide to using Docker with Ubuntu Linux 14.04 on VMware Fusion from a team doing Go development, which gave me some hints to get started. I also looked at the Docker guide to installing Docker on Ubuntu Linux for additional tips. (If you want to install Docker on Debian 7 ("Wheezy"), you will have to update the kernel; but it is packaged for Debian Testing/Debian Unstable -- both of which have recent enough Linux kernels -- and those packages are Docker 1.2.)
Ubuntu Linux 14.04 LTS with Docker 1.2 VM installation
Install minimal Ubuntu Linux 14.04 LTS 64-bit Server, from the ISO image, under VMWare Fusion. It is probably a good idea to manually allocate some additional RAM to the VM (it is going to run more things inside the VM than normal), fix the MAC address for the ethernet (so as to allow assigning a persistent IP address), and install OpenSSH-Server when prompted during the install (so the remaining steps can be done via ssh rather than on an emulated console). I called my VM "docker", and just accepted the default disk partitioning.
Log in on "docker" VM's console, and find out the IP address. Make sure you can ssh into that IP from your OS X environment (and, eg, install your ssh keys to avoid typing passwords frequently).
Patch (and reboot) your Ubuntu Linux 14.04 LTS install. Then log back in on the console and install the VMWare tools:
sudo apt-get install build-essential
Then use the VMWare Fusion menus to make the VMWare Tools source available to the VM, ie: Virtual Machine -> Install VMWare Tools. Mount and build those tools:
sudo mount /dev/cdrom /media/cdrom
sudo mkdir /usr/local/src/vmware-tools
sudo chown ${USER}:${USER} /usr/local/src/vmware-tools
cd /usr/local/src/vmware-tools
tar -xzf /media/cdrom/VMwareTools*
cd vmware-tools-distrib
sudo ./vmware-install.pl
I chose to install the tools in /usr/local/bin, etc, to keep them away from the OS packaged files. For later convenience (ie, making OS X directories available into Docker containers) do choose to enable the "VMware Host-Guest Filesystem" feature (which now defaults to enabled). (Beware that it wants to create /mnt/hgfs, which will fail if, eg, you have mounted the CDROM on /mnt -- hence the use of /media/cdrom above, even though it is more typing; yes I found this out the hard way.) I also enabled the "VMware automatic kernel modules" feature, so that modules will be built for newer kernels as they are installed, and the vmblock feature for dragging/copying files between the host and guest in case it was useful (which mounted on /run/vmblock-fuse).
From here, Docker 1.0 can be installed directly from the Ubuntu Linux 14.04 LTS "Universe" repository:
sudo apt-get install docker.io   # Older Docker 1.0.1 only
(note the package name -- docker.io -- as there is an older Debian/Ubuntu packaged WindowMaker dock app that was packaged as docker!).
However the latest release is Docker 1.2, so with something this bleeding edge it is worth using the upstream (ie, Docker) package repository.
To install Docker 1.2 from the Docker repositories (see also their simple install shell script):
sudo apt-get install apt-transport-https   # Probably installed already
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
echo "deb https://get.docker.io/ubuntu docker main" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update
sudo apt-get install lxc-docker
Which will install lxc-docker and lxc-docker-1.2.0 (at present), plus some dependencies. Providing the apt-key was added there should not be any warnings/errors. lxc-docker is just a metapackage created to depend on the latest version; the files are in the lxc-docker-1.2.0 package.
Ideally one would verify the fingerprint of the GPG key used to sign the Docker releases, prior to adding it, but there does not seem to be an obvious way to do that, beyond looking on the Ubuntu Keyserver, and checking that the fingerprint appears in various guides. (For instance it does not obviously seem to have been signed by any other keys...)
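One small after-the-fact check (a sketch, not real verification) is to have apt-key print the imported key's full fingerprint and compare it by eye with the one passed to --recv-keys above:
sudo apt-key fingerprint A88D21E9   # last 8 hex digits of the key imported above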
Note that there is also a Docker Ubuntu PPA (from the "Docker Ubuntu Maintainers"), which builds later versions of the docker.io package; this Ask Ubuntu thread explains the difference: the docker.io package is an Ubuntu one, and the lxc-docker is a Docker upstream one.
Ubuntu Linux 14.04 LTS with Docker 1.7+
ETA, 2016-02-19: Mid-2015, Docker changed to a new APT repository, which is needed for post-1.7 Docker versions (current version is Docker 1.10). There is a new set of APT repositories, with one per Debian version and Ubuntu version; for Ubuntu 14.04 LTS it is:
deb https://apt.dockerproject.org/repo ubuntu-trusty main
(which needs to go in /etc/apt/sources.list.d/docker.list).
That has a new verification key (F76221572C52609D), for which Docker suggest:
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
which is slightly safer to run due to Docker using HTTPS for their blog and documentation sites -- but it does just automatically trust everything signed with that key without further authentication.... (Other people think this is not very safe, but between HTTPS and specifying the full key fingerprint most of the risks are mitigated, apart from someone replacing the content on Docker's own servers or gaining control of the key.)
After importing the new key and changing /etc/apt/sources.list.d/docker.list, switching to the new packages requires:
sudo apt-get update
sudo apt-get install docker-engine
to switch from the old lxc-docker package names to the new docker-engine package names. Installing docker-engine will result in removing, but not purging, the old lxc-docker packages. To tidy up, it is also necessary to:
sudo apt-get purge 'lxc-docker*'
to remove the remnants of the old packages. (Since Ubuntu 14.04 LTS does not use systemd -- it uses Upstart -- the migration of settings from /etc/default/docker to the systemd configuration location is not needed; but make sure that your changes in /etc/default/docker are retained.)
Using Docker from within Ubuntu Linux 14.04 LTS VM
From within the Ubuntu Linux 14.04 LTS VM, Docker can be used just like on any other Linux install that includes Docker, which makes it easy to try simple Docker examples. The Docker User Guide has lots of good information on making your own containers. But there are also lots of pre-built containers.
For instance there are "Semi-official" Debian images, which can be used as a base for Debian-based containers -- but they're not just a straight debootstrap; from context they seem to be built from docker-brew-debian.
There are also Ubuntu images which are also usable as a base for Ubuntu-based containers, and install basically what is in Ubuntu minimal (they seem to be built from docker-brew-ubuntu-core). Both seem to be based on docker-brew, and Stackbrew (which seems to be a web-based CI platform for building base images; there are lots of stackbrew Docker repos).
A tiny way to start experimenting is with a BusyBox environment, which is about 10MB in size, eg:
sudo docker run busybox du -sh
will download the latest busybox Docker repository, and cache it for later, so a second run does not need to download it. By default this caching happens under /var/lib/docker/ but you can also override this in /etc/default/docker (and restart the Docker daemon process if you update that).
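As a hedged sketch of that override (the path here is purely illustrative), the daemon's -g (graph directory) option can point the cache somewhere else:
# In /etc/default/docker -- example path only:
DOCKER_OPTS="-g /srv/docker-graph"
sudo service docker restart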
We can inspect how the busybox docker container is set up with:
sudo docker inspect busybox
amongst other things this gives us a GUID for the Image used, which we can then find under /var/lib/docker/:
sudo find /var/lib/docker -name $(sudo docker inspect busybox |
grep Image | cut -f 4 -d '"' | uniq)
(there are two references to the Image GUID, but they're both the same).
The /var/lib/docker/graph/${IMAGEID} location found contains the downloaded image, and the aufs ones are used in union mounting those file systems.
By default Docker images are "stateless"; each time you start it up, you get a new one. (And there are some reasonable arguments that if you do not want that, maybe you should be using LXC directly rather than "mis"-using Docker.) But you can also save a container you just ran as a new base image -- which is probably best done when you've carefully set something up with that intention, rather than just to carry along whatever cruft you accumulated getting to that point. (Ie, think of them as deliberately taken "snapshots".)
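As a minimal sketch of taking such a snapshot (the image name and tag here are illustrative), commit the most recently run container as a new image:
sudo docker ps -l                                     # find the ID of the last container run
sudo docker commit ${CONTAINER_ID} mybase:snapshot1   # save its current state as a new image
sudo docker images                                    # the new image now appears alongside busybox, etc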
Obviously not everything can be completely stateless and still useful. There are a few solutions to this problem. The "web" approach is to push the state outside the application servers (eg, onto separate database servers) which perhaps run in a more traditional manner. Another approach is to give the Docker container a "volume" from the host OS which it can use for more persistent data -- provided each Docker instance is started with that same volume mounted, you have achieved persistent data storage. This can be done with the -v option, or the VOLUME keyword in a Dockerfile.
For instance:
sudo mkdir /data # Host VM
sudo mkdir /data/busybox # Host VM
sudo touch /data/busybox/FROM-HOST-OS # Host VM
sudo docker run -v /data/busybox:/data busybox ls /data
will let you see the file created in the Host VM. And:
sudo docker run -v /data/busybox:/data busybox touch /data/FROM-DOCKER
ls /data/busybox # Host VM
will let you create a file inside the Docker container that is visible outside. This has obvious benefits for maintaining state across runs.
A recommended pattern to allow using the same volume(s) in multiple containers is to create a data volume container, which is just a container that exists solely for the purpose of being able to inherit its data volumes, as sketched below. (Some useful "under the hood" details.)
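A minimal sketch of that pattern (the container and volume names are illustrative):
sudo docker run -v /data --name datastore busybox true        # data-only container; exits immediately
sudo docker run --volumes-from datastore busybox touch /data/shared
sudo docker run --volumes-from datastore busybox ls /data     # sees the file created by the previous run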
If we run something in the Docker container that stays running, then we can use docker ps to inspect it. In the case of the BusyBox Docker it does not seem to have a useful interactive shell, but we can make it sleep for a while. For instance, in one window:
sudo docker run -v /data/busybox:/data busybox sleep 60
and then in another window:
sudo docker ps
Obviously using any other more fully-featured Docker container would give us plenty of additional options. But at this point I just wanted to be sure Docker was working properly within the Ubuntu Linux 14.04 LTS VM, before carrying on.
Beyond Docker as root, within the Ubuntu Linux VM
Being able to run Docker within the VM (as described above) is useful but not exactly seamless for interactive development -- especially the need to run all commands via sudo.
Instructing Docker as a normal user within the Ubuntu Linux VM
By default the docker service (running as root) listens for instructions on /var/run/docker.sock. That is readable/writable by root and the docker group. If you care only about controlling docker as a normal user within the Linux VM, you can add your user account (and other trusted user accounts) into the docker group, eg:
sudo usermod -aG docker ${USER}
(You will need to log out, then log in again, so that your shell process picks up the new docker group membership; use id to verify what your shell process has, and id ${USER} to verify what it could have if you logged in again.)
After logging in again, you can run docker commands without "sudo", eg:
docker run -v /data/busybox:/data busybox touch /data/asme
ls /data/busybox # On host VM
But note that within the container the commands are still run as root (since the docker command asks the docker service to execute the commands on its behalf, via that /var/run/docker.sock socket) -- the group membership does not change how docker runs things, just expands who has permission to ask the docker service to do things.
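A quick way to see this for yourself:
docker run busybox id   # reports uid=0(root), even though the docker command was run as a normal user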
If your host development environment is Linux, that is probably sufficient to make your life easy. If your host development environment is not Linux, and you are using the VM approach described above, you may wish to relax the security even further.
Instructing Docker from outside the Ubuntu Linux VM
For an isolated development environment it can be helpful to expose Docker's service to other hosts via TCP. Most likely this would be a terrible idea on a shared system or network, especially in anything production/external facing (something like Server Side Request Forgery (SSRF) could potentially let anyone able to reach one of your containers control everything managed by Docker, inside and outside that container, from outside your network; Mike Haworth's talk at Kiwi Pycon 2014 included a good example of SSRF -- slides and video). (If you insist on exposing the Remote API via TCP in production, then at least consider securing the Docker Remote API with TLS client certificate validation. And while you are there you should consider other Docker security issues -- eg, root inside a Docker container is still reasonably root-like.)
For the situation described in this blog post (Docker in a VMWare Fusion VM on an OS X development system), it is helpful to follow the approach used by boot2docker, which tells the docker service to also listen on TCP port 2375 -- for connections from anywhere. (TCP/2375 is Docker's well known port, reserved with IANA.)
This is only somewhat safe if network restrictions limit who can reach that port (eg, the VM's network interface is only reachable from your host OS X) and you trust all users on that host system. But that may be an acceptable trade off for your development laptop, used only by you, and only behind a good firewall.
To do that on Ubuntu Linux we need to edit /etc/default/docker and restart the service:
sudo perl -i.bak -ne 'if (/^#DOCKER_OPTS/) {
print; print "#\n# Expose Docker to the world (development only!)\n";
print qq(DOCKER_OPTS="-H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375"\n);
} else { print; }' /etc/default/docker
grep "^DOCKER_OPTS" /etc/default/docker # Check DOCKER_OPTS
sudo service docker restart
Then check that docker is running, and listening for both Unix and TCP sockets:
ps ax | grep '[d]ocker' # Check still running
sudo lsof | grep docker.sock | head -1 # Listening for unix socket
netstat -na | grep '[2]375' # Check listening for TCP
docker -H unix:///var/run/docker.sock ps # Docker via unix socket
docker -H tcp://127.0.0.1:2375 ps # Docker via TCP socket
sudo rm /etc/default/docker.bak # If docker is running
(If that does not work for some reason check /var/log/upstart/docker.log for the log upstart creates for the docker service; hopefully that will give a hint to syntax errors. In particular beware that docker does not like a trailing / on the tcp:// URI!)
At this point you should be able to connect from your OS X host machine to TCP/2375 on your Ubuntu Linux VM (eg, test with telnet from OS X). To make that more useful put the IP address assigned to your Ubuntu Linux VM into your OS X hosts file (eg, as docker) so you can easily access it as "docker", eg telnet docker 2375.
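For example (the IP address here is purely illustrative -- use whatever your VM was actually assigned):
echo "192.168.218.128  docker" | sudo tee -a /etc/hosts   # on OS X
telnet docker 2375                                        # should connect; Ctrl-] then "quit" to exit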
This effectively replicates what "boot2docker" does, in a much less compact and slightly less convenient form -- but using VMWare Fusion rather than VirtualBox. (It is less convenient because you need to install the VM by hand, and start/stop it by hand.)
Controlling Docker from OS X
The Docker Remote API is basically a REST API, usually returning JSON, but with some special cases. So it could be mostly managed with any HTTP/REST client. But it is most conveniently used via a docker Remote API client.
If you are using Homebrew, then there is already a "brew" for the docker client, which can be installed (brew install docker; eg quick guide to Docker via Homebrew). But I have been using MacPorts for years, and did not want both MacPorts and Homebrew installed and competing for which runs on what command (that seems like a recipe for duplication and confusion -- especially since HomeBrew seems to put things in /usr/local, potentially conflicting with my own files). Fortunately HomeBrew Formulas are basically short Ruby snippets, and can be viewed on GitHub (although not all of them easily, as there are thousands in that directory, so GitHub won't list them all...):
The docker Formula shows that it is just doing a limited build of the Official docker client, from a branch of the GitHub git repository.
The Official docker client on GitHub is written in the Go Language (from Google). So building the client requires a Go environment; both MacPorts and HomeBrew have the ability to install a suitable environment. With MacPorts:
sudo port selfupdate
sudo port install go
Then we can clone the docker GitHub repository and build the client from that (we're explicitly building version 1.2.0 to match what was installed in the VM):
sudo mkdir -p /usr/local/package
sudo chown ${USER} /usr/local/package
cd /usr/local/package
git clone https://github.com/docker/docker.git --branch v1.2.0
cd docker
export DOCKER_CROSSPLATFORMS=1
export AUTO_GOPATH=1
export DOCKER_CLIENTONLY=1
hack/make.sh dynbinary
ls -l /usr/local/package/docker/bundles/1.2.0/dynbinary/docker-1.2.0
Which, as might be obvious, is something of a hack -- but AFAICT is the hack that HomeBrew is using, since there is not really obviously another way to just build the client -- outside of docker. (The build environment is set up to use docker to create an environment to build docker -- which is wonderfully recursive, but not very helpful on a platform that cannot run docker directly.... You get a warning that the hack/make.sh is not running in a Docker container as a result, but it does seem to work.)
Using that client is best done via a small shell script which sets the DOCKER_HOST environment variable to point at your docker VM, eg /usr/local/bin/docker containing:
#! /bin/sh
# Run docker client against docker inside docker VM
#
DOCKER_BINARY='/usr/local/package/docker/bundles/1.2.0/dynbinary/docker-1.2.0'
DOCKER_HOST='tcp://docker:2375'
export DOCKER_HOST
exec "${DOCKER_BINARY}" "$@"
Assuming you have created the hosts file entry for your docker VM (mentioned above) with the right IP, and the VM is running, and the docker service in that VM is listening on TCP/2375, then you should be able to run docker commands from your OS X host:
docker info
docker run -v /data/busybox:/data busybox du -sm
docker run -v /data/busybox:/data busybox touch /data/from-osx
and if you check /data/busybox within the VM then you should see the file created.
The remaining challenges are ones that are common to having your commands nested one (Ubuntu Linux VM) or two (Docker Container) levels deep -- things like referring to directories that are not in common, and port forwarding.
ETA, 2016-02-19: Docker 1.10 client can be built with:
cd /usr/local/package/docker/
git checkout master
git pull
git remote prune origin
git pull
git branch -t v1.10 origin/release/v1.10
git checkout v1.10
export DOCKER_CROSSPLATFORMS=1
export AUTO_GOPATH=1
export DOCKER_CLIENTONLY=1
hack/make.sh dynbinary
which enables using Docker remotely from OS X into the docker VM; just edit the DOCKER_BINARY config item in /usr/local/bin/docker to say:
DOCKER_BINARY=/usr/local/package/docker/bundles/1.10.1/dynbinary/docker-1.10.1
Sharing folders from OS X to Linux, with VMWare Fusion
VMWare Fusion includes the ability to share folders from the host OS to the guest. To enable this:
Create a folder to share, eg:
mkdir -p /vm/shared/docker/data
Make sure the VM is powered off
Go to the Sharing section of VM's Settings (Virtual Machine -> Settings... -> Sharing)
Turn on Shared Folders, if not already enabled
Add the folder to share with the "+" bottom at the bottom left
Close the settings, and boot up the VM again
In theory the files then appear within /mnt/hgfs in the Linux VM, but you may find you need to rebuild the VMWare Tools again (instructions above) before this works. (A sign that this is required is a kernel log message saying vmhgfs: module verification failed.)
Once working:
touch /vm/shared/docker/data/from-osx
ssh docker 'ls -l /mnt/hgfs/data/'
And:
ssh docker 'touch /mnt/hgfs/data/from-linux'
ls -l /vm/shared/docker/data/
should work. Beware that /mnt/hgfs is a magic mount that does not exactly respect unix file permissions, and the uid/gid exposed within Linux will be the one from OS X (which does not match the Ubuntu defaults). So it is best used for directories, and data, where access restrictions are not a major concern -- on an isolated development system.
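A quick way to see that ownership mismatch is to list the share with numeric IDs:
ssh docker 'ls -ln /mnt/hgfs/data/'   # uid/gid shown are the OS X user's, not any local Ubuntu account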
(Note that you cannot share anything under the virtual machine's directory on OS X, eg, under /vm/docker.vmwarevm; the sharing is intended for data directories.)
Other options include:
Using the sshfs FUSE file system to mount a directory from the host OS X environment (there is also an OS X FUSE, including sshfs to potentially go the other way, but I have not used that; people apparently made it work with MacFusion if OSX FUSE was run in MacFUSE compatibility mode).
Run a NFS Server or CIFS server inside Docker, and mount that from the OS X side (eg as shown in slides from "Be a happier developer with Docker: Tricks of the trade", which is filled with useful tips). The general idea here is to share a folder inside the Ubuntu Linux VM with (a) a NFS server under Docker (and thus the OS X host) and (b) other VMs.
Mapping ports through to OS X
Docker maps ports out of the container using iptables (eg, NAT rules). These mapped ports default to being available on the Ubuntu Linux VM's host IP. Eg in one window:
docker run -v /data/busybox:/data -p 8080:80 busybox nc -l -p 80
and in another:
echo "Hello, World!" | nc docker 8080
you will reach the nc listening inside the Docker container, which will exit having read the text that we passed in.
This appears to be implemented using Linux's IP Tables NAT support, so as a hack you can map additional ports with live iptables rules. (That is not recommended though, as it is not reproducible; the recommended approach is to save the container, and then restart it with additional options. But live iptables rules can be handy for some debugging.)
Note that if you do not specify a port to map, then docker will map some arbitrary port (typically in the 49xxx range) through to the port in the VM. Which is really only useful if you have something to find that port again and hook something up to it (eg, Docker links to automatically link various services in various containers together). docker ps will show those mapped ports, if you do want to do it by hand.
Mapping ports is useful, eg, for running a web application in Docker, such as the Docker example webapp (about 100MB of various layers):
docker run -d -P training/webapp python app.py
docker ps -l
then we can send our web browser to http://docker:PORT/, where PORT is the dynamically generated (or statically assigned with -p PUBLIC:PRIVATE) public mapped port (ie, at the left side of the port mapping reporting), and http://docker refers to our host entry for the containing VM (described above).
It is also possible to ask docker what port should be used to reach a given port in a given container with, eg,
docker port ${CONTAINER_NAME} ${PORT_IN_CONTAINER}
where the ${CONTAINER_NAME} is either manually assigned (--name=MYNAME) or dynamically generated (creative adjective/noun pairs!), and ${PORT_IN_CONTAINER} is the container's idea of where connections should arrive. (Also note that this container is run in the background -- ie, detached -- with -d -- which means on startup it returns a GUID to reference it, that can be saved and used later in place of the name, eg docker port ${GUID} ${PORT_IN_CONTAINER}. Automated scripting would probably benefit from that.)
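For automated scripting, a small sketch (assuming the training/webapp example above, which listens on port 5000 inside the container) would capture that GUID and query the mapping:
GUID=$(docker run -d -P training/webapp python app.py)   # detached run prints the container GUID
docker port ${GUID} 5000                                 # reports the public host:port mapped to container port 5000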
docker logs ${CONTAINER_NAME}
and possibly more useful, the tail -f variant on that:
docker logs -f ${CONTAINER_NAME}
can be used to see logs from the tool (I think these are what gets logged to stdout, but that's common for a development run).
(containers started in the background, ie, detached, need to be explicitly stopped, eg: docker stop ${CONTAINER_NAME})
Other than needing to replace the 0.0.0.0 in the public facing port with docker (rather than 127.0.0.1), handling ports does not seem particularly more complicated than using docker "natively".
Next steps
Ideally I want to be able to use Docker as a Vagrant Provider (so that Vagrant builds things to run inside Docker, rather than requiring a separate VM environment). It is not especially obvious whether it is possible to do this using the vagrant up command from OS X directly (and having that use the docker command from OS X directly), or whether Vagrant will insist on knowing better and starting its own Docker environment (thus leading us back down the Vagrant/VirtualBox or Vagrant/VMWare Fusion paid integration rabbit hole again). Or even if using Vagrant and Docker together offers any specific advantages (if you're not planning on using Vagrant to target, eg, a cloud platform later).
(See also Vagrant dockerbox, described in this blog post.)
In particular I started investigating with the idea of being able to use Catalyst IT's "basil" to build Python web framework development environments. Which are built with Vagrant. But it looks like they want to create Vagrant VMs, and customise them with Puppet, rather than use Docker -- so some transliteration is probably required. Possibly completely replacing the use of Vagrant with Docker. (They also appear to have based their images on Ubuntu Linux 12.04 LTS -- aka Precise -- which doesn't seem a perfect place to start.) It may be that there are better pre-built Docker containers to use already.
ETA, 2014-09-20: In order to pre-seed my Docker images (so that the bases for various other things I might want to try later are present), I've done:
docker run -i -t ubuntu /bin/bash # == ubuntu:latest
docker run -i -t ubuntu:14.04 /bin/bash
docker run -i -t ubuntu:12.04 /bin/bash
docker run -i -t debian:wheezy /bin/bash
docker run -i -t debian:7.6 /bin/bash
docker run -i -t debian:stable /bin/bash
docker run -i -t debian:jessie /bin/bash
docker run -i -t debian:testing /bin/bash
baseimage-docker is the Phusion baseimage, which is Ubuntu Linux 14.04 with various core services you would expect in an Ubuntu system running (eg, logging, cron, etc). They appear to release updated baseimages moderately often. (Source on GitHub, with detailed README.)
docker run -i -t phusion/baseimage /bin/bash # == ...:latest
docker run -i -t phusion/baseimage:0.9.13 /bin/bash
Also fig (written in Python; source on GitHub) is suggested by some as a "Vagrant for Docker" -- it appears to add a layer of group container startup to Docker, and includes an example Flask environment, and an example Django environment -- roughly the things I wanted Catalyst IT's basil to be able to try out.
ETA, 2015-06-27: One issue encountered with this set up is that certain CentOS 7 packages (such as rsh) cannot be installed onto Docker running with the AUFS storage driver (AUFS is the default storage driver on Ubuntu). Package installs fail due to permission problems around setting (I think) SELinux-related file capabilities (cap_set_file fails; see the related blog post for other impacts, and a bug fix merged upstream).
With Docker 1.6.x it was possible to avoid this problem by switching the storage driver to devicemapper. This is done by:
sudo service docker stop
# Add "-s devicemapper" to DOCKER_OPTS in /etc/default/docker
sudo service docker start
But note that because AUFS is a file-based overlay system and devicemapper is a block based overlay system, you will have to recreate all of your container layers from scratch. (See the Not So Deep Dive Into Docker Storage presentation for a good overview of Docker storage drivers.)
With Docker 1.7.0, that workaround stops working because it is not possible to use the devicemapper storage driver on Ubuntu 14.04 with the Docker-supplied static binary in the Docker-supplied apt repository. Docker will literally not start at all if (a) the storage driver is set to devicemapper or (b) there is a /var/lib/docker/devicemapper directory. (ldd /usr/bin/docker confirms that the Docker-supplied binary is statically linked.)
For Docker 1.7.0 there are two choices:
Manually recompile Docker as a dynamic binary, which then lets it safely talk to udev/devicemapper again (instructions there are for 1.6.2, but in theory a similar approach should work for 1.7.0).
Revert back to the AUFS driver, and upgrade to a newer Linux kernel. I found this by accident, as I was trying to upgrade to a 3.18+ kernel so I could try the overlay file system instead, but when I was running on the later Linux kernel, Docker 1.7.0 with AUFS just worked fine to allow installing the problematic CentOS 7 packages.
For Ubuntu 14.04 LTS the easiest way to get a more modern kernel is to install one of the backported kernel packages from a later Ubuntu release. Ubuntu 14.10 (Utopic) comes with a Linux 3.16 kernel, and Ubuntu 15.04 (Vivid) comes with a Linux 3.19 kernel. I picked the Ubuntu 15.04 (Vivid) backported 3.19 kernel, because it was higher than 3.18 and thus also included the overlay file system.
Installation is trivial:
sudo apt-get install linux-image-generic-lts-vivid
Remember to change docker back to the default storage driver (AUFS) by removing "-s devicemapper" from DOCKER_OPTS in /etc/default/docker.
And also remember to hide /var/lib/docker/devicemapper from Docker so that it does not panic at the sight of devicemapper things:
sudo mv /var/lib/docker/devicemapper /var/lib/docker/devicemapper.disabled
Then reboot to activate the newer kernel, and docker should start properly on boot.
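A quick sanity check after the reboot:
uname -r                             # should now report the backported 3.19 (vivid) kernel
sudo docker info | grep -i storage   # should report aufs as the storage driver again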
Note that if you created containers with the devicemapper storage driver enabled, you'll have to recreate those containers/layers again with AUFS -- for the same reason that you had to recreate them when going from AUFS to devicemapper. In the case of my testing environment only one container got affected by this AUFS to devicemapper to AUFS adventure, so it was not a big deal to just keep recreating it. But if you have a lot of containers you might want to docker export ... the containers/layers you care about before changing storage drivers, and then import them again afterwards. (See Switching Docker from aufs to devicemapper for a guide to exporting the important images and then loading them again.)
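As a minimal sketch of that (the image name and tag are illustrative; docker save/load preserves images and their tags, while docker export/import operates on a single container's filesystem):
docker save -o /tmp/myimage.tar myimage:latest   # before switching storage drivers
# ... change the storage driver and restart the docker service ...
docker load -i /tmp/myimage.tar                  # after switching, reload the image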