Using docker-compose for multi Docker container solutions

Marco Franssen

In this article I want to show you how to run a multi-container solution on Docker. Docker Compose is a tool for defining and running multiple Docker containers using a single command. With Compose, you use a docker-compose.yml file to configure your application's services. Then, using a single command, you create and start all the services from your configuration.

In a previous article I showed you how to set up a Docker development environment on Windows or Mac. In case you don't have a working Docker environment yet, you might want to start with that article first.

Why the wutt… I need more containers to run my solution

You might be wondering why you would ever want to run your application across multiple containers. Do I really need this for my development environment? The short answer: because it is a best practice to run only one process per container. The longer answer is:

You want your development environment to be as close as possible to production. Furthermore, you want to be able to scale your platform independently. So if you need a bigger MySQL cluster, you want to be able to spin up more MySQL containers without having your application code in those same containers. Having smaller containers might look like overkill, but in the long term it enables you to redeploy only the containers that actually changed. With Docker containers we can achieve immutable infrastructure: if we don't rebuild a container, it also won't change. Having smaller containers will therefore also help you keep as many parts of your infrastructure as possible unchanged. As we are testing against immutable infrastructure, this also brings less risk when we deploy the same Docker images to production.

In case you don't actually get how to build or run a single container, you might also want to check out my previous article on running your Angular app in a Nginx container. That article gives you some insight into how to build a container and how to run a single container. So before going nuts in this article, make sure you have a basic understanding of how to run a single container.

How to install

Same as in the previous article, I will be using a package manager to install the required tooling: Chocolatey for Windows or Homebrew for macOS. You can use the commands below, respectively, from your PowerShell or Bash.

PowerShell
cinst -y docker-compose
terminal
brew install docker-compose

Now that we have installed the tooling, we are able to start using it.
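
To verify the installation succeeded, you can check the installed version from your shell.

terminal
docker-compose --version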

How does it workzzz

As you should know by now, we use the Dockerfile to define the configuration for a Docker image. To configure the orchestration for building/running multiple containers, we use the docker-compose.yml file. As the file extension already reveals, this file contains YAML. In this file we define which custom containers will be built and which containers will be retrieved from the registry. See below for an example YAML file.

docker-compose.yml
version: "2"
services:
  web:
    build: web/
    ports:
      - "8080:80"
    volumes:
      - logvolume01:/var/log/nginx
    links:
      - api
  api:
    build: api/
    ports:
      - "3000:3000"
    volumes:
      - logvolume01:/var/log
    depends_on:
      - cache
    links:
      - cache
  cache:
    image: redis
volumes:
  logvolume01: {}

It will build our web application from my previous blog post, so the Angular 2 sample app is located in the subfolder web, with its Dockerfile in the root of that directory. In case you want more detail on the Dockerfile for the web image, have a look at my previous article. Furthermore, we also build an api image, using the Dockerfile in the api directory (hence the build: api/). Once the image is built, Docker maps port 3000 on the host to port 3000 exposed by the container (the exposed port is defined in the Dockerfile). It also mounts the named volume logvolume01 on the container's /var/log folder. Volumes are used to mount folders into your containers and to preserve the data when a container is destroyed. Furthermore, there is a dependency specified on a cache container. For the Redis cache container we just use the default image available from the Docker registry, instead of building our own.

To clarify things a little further I will also show you a Dockerfile.

Dockerfile
FROM node:6.3.1-onbuild
MAINTAINER [email protected]
EXPOSE 3000

As you can see, we are using a Node.js application that exposes itself on port 3000. Now it is just a matter of adding a Node.js webserver in this very same folder. Make sure the npm start command is defined in your package.json. This npm script is called when the container is launched (unless specified differently in your Dockerfile). See the CMD ["npm", "start"] in the onbuild Dockerfile we are deriving from.
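
For example, a minimal package.json could look something like this. Note that the server.js entry point is just an assumption here; point the start script at whatever file starts your webserver.

package.json
{
  "name": "api",
  "version": "1.0.0",
  "scripts": {
    "start": "node server.js"
  }
}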

How to apply this in your own projects

The following approach could be a way to orchestrate the platform for your project. I assume you will be using Git. I suggest creating one Git project that contains your docker-compose.yml file and uses Git submodules for all your platform components. So imagine your platform requires a web frontend and some backend services; we can easily develop the web frontend and the backend services in their own Git repositories. The main Git project can then be used to version and orchestrate your platform as a whole, for example to coordinate the deployment of the platform.
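
Setting up such a main project could look something like this. Note that the repository URLs below are hypothetical placeholders; substitute your own repositories.

terminal
git init my-platform
cd my-platform
git submodule add https://github.com/you/angular-web.git angular-web
git submodule add https://github.com/you/microservice-nodejs.git microservice-nodejs
# add the remaining components the same way,
# then add your docker-compose.yml and commit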

Let's have a look at a potential folder structure.

my-platform
|-- .git
|-- .gitmodules
|-- angular-web
|   |-- .git
|   |-- .dockerignore
|   |-- Dockerfile
|   |-- gulpfile.js
|   |-- package.json
|   `-- src
|       |-- html
|       |   `-- index.html
|       |-- css
|       |   `-- style.css
|       |-- img
|       `-- js
|           `-- app.js
|-- microservice-php
|   |-- .git
|   |-- .dockerignore
|   |-- composer.json
|   |-- composer.lock
|   |-- Dockerfile
|   |-- docker-compose.yml
|   `-- src
|       `-- index.php
|-- microservice-python
|   |-- .git
|   |-- .dockerignore
|   |-- Dockerfile
|   |-- docker-compose.yml
|   |-- requirements.txt
|   `-- runserver.py
|-- microservice-nodejs
|   |-- .git
|   |-- .dockerignore
|   |-- Dockerfile
|   |-- docker-compose.yml
|   |-- gulpfile.js
|   |-- package.json
|   `-- server.js
|-- microservice-java
|   |-- .git
|   |-- .dockerignore
|   |-- build.gradle
|   |-- Dockerfile
|   |-- docker-compose.yml
|   |-- gradle.properties
|   |-- gradlew
|   |-- gradlew.bat
|   |-- settings.gradle
|   `-- src
`-- docker-compose.yml

As you can see, we also have a docker-compose.yml file in each of the backend services, as we want to be able to run and test those services individually. The Compose file for such a service contains the build part to build the container for the service, as well as the dependencies of that service, for example a dependency on a Redis container or a MongoDB container.

The main docker-compose.yml file which assembles all the projects together could look something like this.

docker-compose.yml
version: "2"
services:
  web:
    build: angular-web
    ports:
      - "8080:80"
  oauth2:
    build: microservice-java
    ports:
      - "3000:3000"
    environment:
      KAFKA: http://backbone:9092
      MONGO: mongodb://mongodb:27017/oauth2
    depends_on:
      - cache
      - mongodb
      - backbone
    links:
      - cache
      - mongodb
      - backbone
  user-profile:
    build: microservice-nodejs
    ports:
      - "3001:3000"
    environment:
      KAFKA: http://backbone:9092
      MONGO: mongodb://mongodb:27017/users
    depends_on:
      - cache
      - mongodb
      - backbone
    links:
      - cache
      - mongodb
      - backbone
  blogs:
    build: microservice-php
    ports:
      - "3002:3000"
    environment:
      KAFKA: http://backbone:9092
      MONGO: mongodb://mongodb:27017/blogs
    depends_on:
      - cache
      - mysql
      - backbone
    links:
      - cache
      - mysql
      - backbone
  marketing-reports:
    build: microservice-python
    ports:
      - "3003:3000"
    environment:
      KAFKA: http://backbone:9092
    depends_on:
      - cache
      - graphs
      - backbone
    links:
      - cache
      - graphs
      - backbone
  cache:
    image: redis
  mongodb:
    image: mongo
  mysql:
    image: mysql
  graphs:
    image: neo4j
  backbone:
    image: kafka

I guess you will be able to derive the docker-compose.yml files for the individual backend services yourself. Just remove the parts that are not directly required for that individual service ;-).
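
To give you an idea: a minimal sketch of the docker-compose.yml for the microservice-nodejs project could look like this, assuming the user-profile service only depends on the cache, MongoDB and the backbone.

docker-compose.yml
version: "2"
services:
  user-profile:
    build: .
    ports:
      - "3000:3000"
    environment:
      KAFKA: http://backbone:9092
      MONGO: mongodb://mongodb:27017/users
    depends_on:
      - cache
      - mongodb
      - backbone
    links:
      - cache
      - mongodb
      - backbone
  cache:
    image: redis
  mongodb:
    image: mongo
  backbone:
    image: kafka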

The above docker-compose.yml results in our web application running on port 8080 and the backend services on ports 3000, 3001, 3002 and 3003. The containers for the cache, MongoDB, MySQL, Neo4j and our backbone use images from Docker Hub. For our own code we build an image based on the Dockerfile available in the individual project, so it runs the current version of our code. The depends_on property makes a container wait to start until the containers it refers to have started. The links property enables the containers above to communicate within the internal network created by docker-compose. For these links DNS records are created, so the containers can use the service names, for example in the connection strings to the databases. In the above example all services expect these settings in environment variables.
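
As a minimal sketch of that last point: a Node.js service like user-profile could pick up those settings as shown below. The fallback values are assumptions for running the service outside the Compose network.

server.js
// Read the connection settings injected by docker-compose.
// The fallbacks only apply when running outside the Compose network.
const mongoUrl = process.env.MONGO || 'mongodb://localhost:27017/users';
const kafkaUrl = process.env.KAFKA || 'http://localhost:9092';

console.log(`Using MongoDB at ${mongoUrl} and Kafka at ${kafkaUrl}`);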

If you want to run all the containers defined in your docker-compose.yml file, you simply enter the following command in your shell. Note that it attaches to the logs of the containers in your shell, and that pressing ctrl+c stops the containers again. If you want the containers to keep running in the background instead, start them in detached mode with docker-compose up -d.

docker-compose up

If you want to stop and remove all the containers (and the default network) defined in your docker-compose.yml file, you simply enter the following command in your shell.

docker-compose down

To view all the logs of the running containers, you can simply enter the following command. (This shows the same output as running docker-compose up.)

docker-compose logs

This is only the tip of the iceberg. With docker-compose --help you should be able to figure out more about docker-compose and how to use it. I hope this article gave you a good starting point to start exploring docker-compose.
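
A few other subcommands you will quickly find useful:

terminal
docker-compose ps        # list the containers and their state
docker-compose build     # (re)build the images for your services
docker-compose logs -f   # follow the logs of all running containers
docker-compose up -d     # start all containers in detached mode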
