Docker Containers

This is the first post in a series exploring the ins and outs of using Docker with Elastic Beanstalk as part of a continuous deployment strategy. As usual, the code for this example is on GitHub.

Docker is an excellent tool for isolating and scaling web services. If you are unfamiliar with containerization concepts, read the Docker Overview. The most important concept is that containers are ephemeral: they come and go as needed to perform tasks.

It is possible to deploy Docker containers via Elastic Beanstalk, but the documentation is a bit daunting, so we'll work through it step by step.

Making Containers

You will need to install the Docker Toolbox on your development machine and launch the Docker Quickstart Terminal.

This puts you in a shell that has all the docker tools.

First we need to define the contents and running context of our webapp container using a Dockerfile. Here are some things to keep in mind when designing a container.

For our example we have a LoopBack web app that we want to containerize for deployment to Elastic Beanstalk. The Dockerfile defines the dependencies, copies the needed files into the container, defines how the service within the container is started, and declares what port the service is exposed on. In this example we are using supervisord to start the app.


# install packages
FROM ubuntu  
RUN apt-get update  
RUN apt-get install -y supervisor  
RUN apt-get install -y curl  
# NodeSource setup script (URL assumed; the original omitted it).
# sudo is dropped: it is not in the ubuntu base image, and RUN already executes as root.
RUN curl -sL https://deb.nodesource.com/setup_lts.x | bash -  
RUN apt-get install -y nodejs  
RUN apt-get install -y git  
RUN apt-get install -y imagemagick

# copy app into container
ADD assets /var/app/current/assets  
ADD client /var/app/current/client  
ADD common /var/app/current/common  
ADD docker-assets/webapp /var/app/current/docker-assets/webapp  
ADD server /var/app/current/server  
ADD tests /var/app/current/tests  
ADD working /var/app/current/working  
ADD gruntfile.js /var/app/current/gruntfile.js  
ADD package.json /var/app/current/package.json

# set up supervisord
RUN cd /var/app/current; cp docker-assets/webapp/supervisord.conf /etc/supervisor/conf.d/supervisord.conf

# To run npm install or not to run npm install, that is the question.
# in this case it is not needed so just copy the entire node_modules
# directory to the container so it exactly matches the development
# environment
ADD node_modules /var/app/current/node_modules

# If there were os specific npm modules we could run npm install
# but that comes with some thorny issues - you could get different
# versions of packages running inside the container than are
# running in your development environment. This is an issue to
# consider when planning your continuous deployment strategy.
# Where possible I like npm to be a function of the development
# environment keeping deployment out of module version hell.
# RUN cd /var/app/current; npm install

# expose webapp port
EXPOSE 3000

WORKDIR "/var/app/current"

CMD ["/usr/bin/supervisord"]  
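The supervisord.conf copied into /etc/supervisor/conf.d above is what keeps the app running inside the container. The post doesn't show its contents, but a minimal sketch might look like this (the program name and the node entry point are assumptions; substitute your app's actual start command):

```ini
; run supervisord in the foreground so the container does not exit
[supervisord]
nodaemon=true

[program:webapp]
; hypothetical entry point - adjust to match your app
command=node server/server.js
directory=/var/app/current
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
```

The nodaemon=true setting is the important part: if supervisord daemonizes, the CMD process exits and Docker stops the container.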

Run docker build -t webapp . to build the container. What this actually does is build an 'image' that containers will be based on. See the docs.

Power up the container shell and poke around:
docker run --name webapp --rm -p 3000:3000 -it --env-file ./localdev.env --entrypoint /bin/bash webapp

Note the --env-file ./localdev.env option. As usual, our web app needs some environment variables to run. More on that in the keeping secrets post. In short, that file contains the keys the app needs to run.
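An env file is just KEY=value pairs, one per line, with # comments allowed. A hypothetical localdev.env (these variable names are placeholders, not the app's real keys):

```
# localdev.env - placeholder values for local development
NODE_ENV=development
PORT=3000
DB_PASSWORD=replace-me
```

Docker passes each line verbatim, so don't quote values or prefix them with export.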

ls / shows what looks like an Ubuntu box (because that was specified in the first line of the Dockerfile), and all the files we copied are in /var/app/current.

Type exit when you are done and the container will shut down.

Note the --rm option: it makes the container ephemeral, so it is deleted after it exits. I find this the best option for development environments. If you omit that option the container will still exist with status 'Exited'; docker ps -a lists all containers. Exited containers can be brought back from the dead by name in many docker commands. To clean up all exited containers use:
docker rm $(docker ps -a -q -f status=exited)

To start the webservice on port 3000 run:
docker run --name webapp -p 3000:3000 -dt --env-file ./localdev.env webapp

This starts the container running in the background, exposing the container's port as 3000. You can see what containers are running with docker ps. To see details about the running container use docker inspect webapp.

Note: Docker Toolbox defines a virtual network, so 'localhost' will not work to connect to the container. To find the IP address of the Docker virtual machine, use docker-machine ip.

Now you can access the webservice:
curl $(docker-machine ip):3000

Get a shell on a running container:
docker exec -it webapp sh

Shut down the container when you are done:
docker stop webapp

But remember - it's still there:
docker ps -a

To clean it up:
docker rm webapp

Now we have a container for the web app. In the next post we'll figure out how to get it into a running Elastic Beanstalk cluster...

Continued in Connecting Services w/Docker Containers

Photo: Ittoqqortoormiit, Eastern Greenland (2007)
Document version 1.0