Using docker-compose for multi Docker container solutions
Marco Franssen
In this article I want to show you how to run your multi-container solution on Docker. Docker Compose is a tool for defining and running multiple Docker containers using a single command. With Compose, you use a docker-compose file to configure your application's services. Then, using a single command, you create and start all the services from your configuration.
In a previous article I have shown you how to set up a Docker development environment on Windows or Mac. In case you don't have a working Docker environment yet, you might want to start with that article first.
Why the wutt… I need more containers to run my solution
You might be wondering why you would ever want to run your application across multiple containers. Do I really need this for my development environment? The short answer is: because it is a best practice to run only one process per container. The longer answer is:
You want your development environment to be as close as possible to production. Furthermore, you want to be able to scale your platform independently. So if you need a bigger MySQL cluster, you want to be able to spin up more MySQL containers without having your application code in those same containers. Having smaller containers might look like overkill, but in the long term it will enable you to redeploy only the containers that actually changed. With Docker containers we can achieve immutable infrastructure: if we don't rebuild a container, it also won't change. Having smaller containers will therefore also help you keep as many parts of your infrastructure as possible unchanged. As we are testing against immutable infrastructure, this also brings less risk when we deploy the same Docker images to production.
In case you don't know yet how to build or run a single container, you might also want to check out my previous article on running your Angular app in an Nginx container. That article will give you some insight into how to build an image and how to run a single container. So before going nuts in this article, make sure you have a basic understanding of how to run a single container.
How to install
Same as in the previous article, I will be using a package manager to install the required tooling: Chocolatey for Windows or Homebrew for macOS. You can use the commands below from your PowerShell or Bash respectively.
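Something along these lines (assuming the docker-compose packages from the Chocolatey and Homebrew repositories):

```shell
# Windows (PowerShell, with Chocolatey)
choco install docker-compose

# macOS (Bash, with Homebrew)
brew install docker-compose
```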
Now that we have installed the tooling, we can start using it.
How does it workzzz
As you should know by now, we use a Dockerfile to define the configuration for a Docker image. To configure the orchestration for building/running multiple containers, we use the docker-compose.yml file. As the file extension already reveals, this file contains YAML. In this file we define which custom containers will be built and which containers will be retrieved from the registry. See below for an example yaml file.
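A minimal sketch of such a file, matching the description that follows (the compose version and the log folder path inside the container are assumptions):

```yaml
version: '2'
services:
  web:
    # builds the Angular2 app from the Dockerfile in the web subfolder
    build: ./web
  api:
    # builds the api image from the Dockerfile in the api directory
    build: ./api
    ports:
      - "3000:3000"        # host port 3000 -> container port 3000
    volumes:
      - .:/code            # map the current directory to /code
      - logvolume01:/var/log
    depends_on:
      - cache
  cache:
    image: redis           # default image from the Docker registry
volumes:
  logvolume01: {}
```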
It builds our web application from our previous blog post; the Angular2 sample app is located in the web subfolder, with the Dockerfile in the root of that directory. In case you want more detail on the Dockerfile for the web image, have a look at my previous article. Furthermore, we also build an api image, using the Dockerfile in the api directory (hence the build entry pointing at that directory). Once the container is built, it maps port 3000 on the host to port 3000 exposed by the container (as defined in the Dockerfile). It also maps the current directory to the /code directory in the container and maps the logvolume01 volume to the container's log folder. Volumes are used to map folders into your containers to preserve the data when the container is destroyed. Furthermore, there is a dependency specified on a cache container. For the Redis container we just use the default image available from the Docker registry, instead of building our own.
To clarify things a little further, I will also show you a Dockerfile.
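A minimal sketch, assuming one of the Node.js onbuild base images (the exact tag is an assumption); these images copy your package.json and sources, run npm install and define CMD ["npm", "start"] for you:

```dockerfile
# The node onbuild base image takes care of copying your package.json
# and sources, running npm install, and defining CMD ["npm", "start"].
FROM node:6-onbuild

# the Node.js application listens on port 3000
EXPOSE 3000
```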
As you can see, we are using a Node.js application that exposes itself on port 3000. Now it is just a matter of adding a Node.js webserver in this very same folder. Make sure the npm start command is defined in your package.json. This npm script will be called when the container is launched (unless specified differently in your Dockerfile). See the CMD ["npm", "start"] in the Dockerfile we are deriving from.
How to apply in your own projects
The following approach could be a way to orchestrate the platform for your project. I assume you will be using Git. I suggest you create one Git project that contains your docker-compose.yml file and uses Git submodules for all your platform components. So imagine your platform requires a web frontend and some backend services; we can easily develop the web frontend and the backend services in their own Git repositories. The main Git project can then be used to version and orchestrate your platform as a whole, for example to coordinate the deployment of the platform as a whole.
Let's have a look at a potential folder structure.
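Something along these lines; all folder and service names are hypothetical:

```
platform/                    <- main Git project
├── docker-compose.yml       <- orchestrates the platform as a whole
├── web/                     <- Git submodule: the Angular2 frontend
│   └── Dockerfile
├── service-one/             <- Git submodule: a backend service
│   ├── Dockerfile
│   └── docker-compose.yml   <- to run/test this service individually
└── service-two/             <- Git submodule: another backend service
    ├── Dockerfile
    └── docker-compose.yml
```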
As you can see, we also have a docker-compose.yml file in our backend services, as we want to be able to run and test those services individually. The Compose file for those services will contain the build part to build the container for the service, as well as the dependencies of that service, for example a dependency on a Redis container or a MongoDB container.
The main docker-compose.yml file, which assembles all the projects together, could look something like this.
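A sketch of such a file; the service names, base images and environment variable names are assumptions, and only two of the four backend services are spelled out (the others would follow the same pattern on ports 3002 and 3003):

```yaml
version: '2'
services:
  web:
    build: ./web
    ports:
      - "8080:80"
    depends_on:
      - service-one
      - service-two
  service-one:
    build: ./service-one
    ports:
      - "3000:3000"
    links:
      - cache
      - mongodb
    environment:
      # the services expect their connection strings in environment
      # variables; the link names double as DNS names
      - REDIS_URL=redis://cache:6379
      - MONGO_URL=mongodb://mongodb:27017/service-one
    depends_on:
      - cache
      - mongodb
  service-two:
    build: ./service-two
    ports:
      - "3001:3000"
    links:
      - mysql
      - neo4j
    environment:
      - MYSQL_HOST=mysql
      - NEO4J_URL=bolt://neo4j:7687
    depends_on:
      - mysql
      - neo4j
  cache:
    image: redis
  mongodb:
    image: mongo
  mysql:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=secret
  neo4j:
    image: neo4j
  backbone:
    image: rabbitmq    # assuming a message bus as the "backbone"
```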
I guess you will be able to derive the docker-compose.yml files for the individual backend services yourself. Just remove the parts that are not directly required for that individual service ;-).
The above docker-compose.yml will result in our web application running on port 8080 and the backend services on ports 3000, 3001, 3002 and 3003. The containers for the cache, MongoDB, MySQL, Neo4j and our backbone are pulled from the Docker Hub. For our own code an image is built to run the current version of our code, based on the Dockerfile available in the individual projects. The depends_on property makes a container wait with starting until the referenced containers are started. The links property enables the above containers to communicate within the internal network created by docker-compose. For these links DNS records are created, so the containers can use those, for example in the connection strings to the databases. In the above example all services expect these settings in environment variables.
If you want to run all the containers defined in your docker-compose.yml file, you simply enter the following command in your shell. Note that it will start following the logs in your shell. You can get back to your shell with ctrl+c; do note that this also stops your containers, so if you want them to keep running in the background, start them detached with the -d flag instead.
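```shell
docker-compose up

# or, to keep the containers running in the background (detached):
docker-compose up -d
```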
If you want to stop all the containers defined in your docker-compose.yml file, you simply enter the following command in your shell.
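```shell
docker-compose stop
```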
To view the logs of all the running containers, you can simply enter the following command (same output as when running docker-compose up).
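```shell
docker-compose logs

# depending on your docker-compose version you may need to pass -f
# to keep following the log output
docker-compose logs -f
```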
This is only the tip of the iceberg. With docker-compose --help you should be able to figure out more about docker-compose and how to use it. I hope this article gave you a good starting point to start exploring docker-compose.