Straight to production with Docker, Ansible and CircleCI

Docker shakes up the way we put applications into production. In this article I’ll present
the main obstacles I encountered while setting up the production workflow of a simple Node.js API called cinelocal.

Erratum: I am now using docker-machine instead of Ansible. You can read why in the comments.

Step 1: set up a development environment

Docker Compose is a tool for defining and running multi-container Docker applications. Cinelocal-api requires 3 services running in 3 containers: a data volume container, a PostgreSQL database and the Node.js API itself.

Here is the corresponding docker-compose.yml defining the 3 services and their relations (read more about compose files):

  
# docker-compose.yml
data:
  image: busybox
  volumes:
    - /data

db:
  image: postgres:9.4
  volumes_from:
    - data
  ports:
    - "5432:5432"

api:
  image: node:wheezy
  working_dir: /app
  volumes:
    - .:/app
  links:
   - db
  ports:
    - "8000:8000"
  command: npm run watch
  

Notice the .:/app line in the api service: it mounts the current folder as a volume inside the container, so any change you make to a source file is immediately visible inside the container.

The npm command of the API container is defined in the package.json file. It runs database migrations (if any) and starts nodemon, a utility that monitors your source for changes and automatically restarts the server.

package.json:


{
  "scripts": {
    "watch": "db-migrate up --config migrations/database.json && node ./node_modules/nodemon/bin/nodemon.js src/server.coffee"
  }
}

Now the API can be started with the command docker-compose up api (it might crash the first time because the node container does not wait for the postgres container to be ready; it will work the second time. This is a known Compose issue).
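
A common workaround (not part of the original repo, just a sketch) is to wrap the start command in a small script that waits for PostgreSQL before launching the app. It assumes netcat is available in the node image:

#!/bin/sh
# wait-for-db.sh (hypothetical helper)
# block until the db container accepts connections, then start the app
until nc -z db 5432; do
  echo "waiting for postgres..."
  sleep 1
done
exec npm run watch

The api service command would then be sh wait-for-db.sh instead of npm run watch.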

Unfortunately, using Docker adds a layer of complexity to everyday commands such as installing a new Node.js package or creating a new migration, because they must be run inside the container. So:

  • All your commands should be prefixed with docker-compose run --rm api (see the example below)
  • The files they edit (package.json with npm install, or migration files with db-migrate) will be owned by the docker user.
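
For example, installing a package by hand looks like this (the chown step gives you back ownership of package.json):

docker-compose run --rm api npm install --save lodash
sudo chown $(whoami):$(whoami) package.json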

To hide this complexity, you can use a Makefile that provides a set of shortcut commands.


# Makefile
whoami := $(shell whoami)

migration-create:
    docker-compose run --rm api \
    ./node_modules/db-migrate/bin/db-migrate create --config migrations/database.json $(name)\
     && sudo chown -R ${whoami}:${whoami} migrations

migration-up:
    docker-compose run --rm api ./node_modules/db-migrate/bin/db-migrate up --config migrations/database.json

migration-down:
    docker-compose run --rm api ./node_modules/db-migrate/bin/db-migrate down --config migrations/database.json

install:
    docker-compose run --rm api npm install

npm-install:
    docker-compose run --rm api \
    npm install --save $(package)\
    && sudo chown ${whoami}:${whoami} package.json

Now, to install a package you can run make npm-install package=lodash, and to create a new migration, make migration-create name=add-movie-table.

Step 2: Provisioning a server

With Docker, whatever your stack is, the provisioning will be the same: you just have to install docker and optionally docker-compose, and that’s it.

Ansible is a great tool to provision a server. You can compose a playbook with roles found on Ansible Galaxy.

To install docker and docker-compose on a server:


# devops/provisioning.yml
- name: cinelocal-api provisioning
  hosts: all
  sudo: true
  pre_tasks:
    - locale_gen: name=en_US.UTF-8 state=present
  roles:
    - angstwad.docker_ubuntu
    - franklinkim.docker-compose
  vars:
    docker_group_members:
      - ubuntu
    update_docker_package: true

Before running the playbook you need to install the roles:


ansible-galaxy install -r devops/requirements.yml -p devops/roles

with:


# devops/requirements.yml
- src: angstwad.docker_ubuntu
- src: franklinkim.docker-compose

I tested this provisioning with Ansible 2.0.2 on Ubuntu Server 14.04.


# Makefile
install:
    ansible-galaxy install -r devops/requirements.yml -p devops/roles

provisioning:
    ansible-playbook devops/provisioning.yml -i devops/hosts/production
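
The devops/hosts/production inventory file referenced by -i is not shown above; a minimal sketch could look like this (host address and user are placeholder values):

# devops/hosts/production (illustrative)
production ansible_host=203.0.113.10 ansible_ssh_user=ubuntu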

Step 3: Package your app and deploy

Each time I deploy the API, I build a new Docker image that I push to Docker Hub (the GitHub of Docker images).

The construction of the API image is described in a Dockerfile:


FROM node:wheezy

# Create app directory
RUN mkdir -p /app
WORKDIR /app

# Install app dependencies
COPY package.json /app/
RUN npm install

# Bundle app source
COPY . /app

EXPOSE 8000
CMD [ "npm", "start" ]

To build and push the image to Docker Hub, I added these two tasks to the Makefile:


# Makefile
build:
    docker build -t nicgirault/cinelocal-api .

push: build
    docker push nicgirault/cinelocal-api

Now make push builds the image and pushes it to Docker Hub (after authentication).
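
The authentication mentioned above is a one-off docker login on your machine; it prompts for your Docker Hub credentials:

docker login
make push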

In the development environment I want to mount my code as a volume, whereas in production I don’t. Using multiple Compose files lets you customize a Compose application for different environments. In our case, we want to split the description of the api service into a common configuration and an environment-specific configuration.


# docker-compose.yml (common configuration)
api:
  working_dir: /app
  links:
   - db
  ports:
    - "8000:8000"
  environment:
    DB_DATABASE: postgres
    DB_USERNAME: postgres

# docker-compose.dev.yml (development specific configuration)
api:
  image: node:wheezy
  volumes:
    - .:/app
  command: npm run watch

# docker-compose.prod.yml (production specific configuration)
api:
  image: nicgirault/cinelocal-api

To merge the specific configuration into the common configuration:

  
    docker-compose -f docker-compose.yml -f docker-compose.dev.yml up api
  

By default, Compose checks for the presence of docker-compose.override.yml, so I renamed docker-compose.dev.yml to docker-compose.override.yml.
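
Concretely, the renaming boils down to this, after which the development stack starts without any -f flag:

mv docker-compose.dev.yml docker-compose.override.yml
# Compose now merges docker-compose.yml and docker-compose.override.yml automatically
docker-compose up api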

Now I can deploy the API with the 3 tasks described in a simple Ansible playbook:


# devops/deploy.yml
- name: Cinelocal-api deployment
  hosts: all
  sudo: true
  vars:
    repository: https://github.com/nicgirault/cinelocal-api.git
    path: /home/ubuntu/www
    image: nicgirault/cinelocal-api
  tasks:
    - name: Pull github code
      git: repo={{ repository }}
           dest={{ path }}

    - name: Pull API container
      shell: docker pull {{ image }}

    - name: Start API container
      shell: docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d api
      args:
        chdir: "{{ path }}"

In the Makefile:


deploy: push
    ansible-playbook -i devops/hosts/production devops/deploy.yml

make deploy builds the image, pushes it and runs the playbook.

Read more about docker-compose in production.

Note: Ansible also ships Docker modules that let you avoid installing docker-compose on the server, at the cost of duplicating the container architecture description in the playbook. Although I didn’t use them for this project, you might consider them; a sketch follows below.
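
For reference, here is a sketch of what such a task could look like with the docker_container module (introduced in Ansible releases newer than the 2.0.2 used here, and simplified; check the Ansible docs for the exact parameters):

# hypothetical alternative to the shell-based "Start API container" task
- name: Start API container
  docker_container:
    name: api
    image: nicgirault/cinelocal-api
    state: started
    ports:
      - "8000:8000"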

Bonus: continuous integration

This section explains how to automatically deploy on production when merging on the master branch if the build passes.

This is quite simple with CircleCI and Docker Hub. Here is a circle.yml file that runs the tests and deploys if the build passes, provided the destination branch is master:


machine:
  services:
    - docker
  python:
    version: 2.7.8
  post:
    # the CircleCI instance already runs postgresql
    - sudo service postgresql stop

dependencies:
  pre:
    - pip install ansible
    - pip install --upgrade setuptools

  override:
    - docker info
    - docker build -t nicgirault/cinelocal-api .

test:
  override:
    - docker-compose run api npm test

deployment:
  prod:
    branch: master
    commands:
      - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      - docker push nicgirault/cinelocal-api
      - echo "openstack ansible_host=$PROD_HOST ansible_ssh_user=$PROD_USER" > devops/hosts/production
      - ansible-playbook -i devops/hosts/production devops/deploy.yml

In addition you’ll have to:

  • define the environment variables used in this file on the CircleCI project settings page
  • authorize CircleCI to deploy on your server:
    1. generate an SSH key pair (use the command ssh-keygen)
    2. add the private key in the project settings on the CircleCI interface
    3. add the public key to ~/.ssh/authorized_keys on the server (see the sketch below)
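
Steps 1 and 3 can be done from your machine with something like the following (the key file name and server address are illustrative):

ssh-keygen -t rsa -f circleci_deploy_key
ssh-copy-id -i circleci_deploy_key.pub ubuntu@your-server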

From now on, deploying to production will be as simple as merging a branch into master.


You liked this article? You'd probably be a good match for our ever-growing tech team at Theodo.

Join Us

  • goovy.io

    Interesting article.
    When you mention “(it might crash the first time because the node container does not wait for the postgres container to be ready. It will work the second time. This is a known compose issue).”
    You should convert to version 2 of the Compose file format and try the depends_on instruction. See the doc https://docs.docker.com/compose/compose-file/#depends-on

  • Nicolas Girault

    Thank you for sharing this tip. I’ll test it

  • ni

    According to the docs, you shouldn’t be using docker-compose for production environments, at least not yet.

    https://docs.docker.com/v1.8/compose/production/ : “While Compose is not yet considered production-ready…”

  • Nicolas Girault

    This documentation is an old one! The latest documentation doesn’t say so https://docs.docker.com/compose/production/ :-)

  • ni

    thanks! my fault.

  • KJ

    Hello – the latest Makefile in the repo seems to have no ansible-galaxy/ansible commands. Is there a reason you’ve switched completely to docker-compose in the Makefiles?

  • Nicolas Girault

    Hi KJ! Sorry for this late answer. You’re right, I updated the link to the state of the repo at the time I wrote this article. Although Ansible was working great, I am now using docker-machine, which has advantages such as:
    – provisioning your server with Docker (no more provisioning script)
    – avoiding duplicating the Dockerfile (docker-machine reads the Dockerfile)

  • msojda

    Hey Nicolas – nice article! The one thing which worries me a bit is that the codebase needs to live in one repo with the ansible playbook. Is there a way to separate it and run the ansible playbook from other repository (which contains only the ansible stuff) when the API build is green?

  • Nicolas Girault

    Hi! I would use git submodules

  • Naomi See

    Looks like your repo is gone. Is this obsolete already?

  • Nicolas Girault

    Sorry, I moved the repo to GitLab. I updated the link. Nevertheless I would advise you to use the latest version of the repository, which uses docker-machine! Thank you for the feedback.
