Building an Elasticsearch cluster using Docker Compose and Traefik
Marco Franssen /
6 min read • 1133 words
In a previous blog I have already written about setting up Elasticsearch in docker-compose.yml. I have also shown you before how to set up Traefik 1.7 in docker-compose.yml. Today I want to show you how we can use Traefik to expose a loadbalanced endpoint on top of an Elasticsearch cluster.
We will set up our cluster using docker-compose so we can easily run and clean up this cluster from our laptop.
Create an Elasticsearch cluster
Let's first create a two-node Elasticsearch cluster using the following docker-compose setup.
```yaml
version: "3.7"

services:
  es01:
    image: "docker.elastic.co/elasticsearch/elasticsearch-oss:7.7.1"
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      node.name: es01
      discovery.seed_hosts: es02
      cluster.initial_master_nodes: es01,es02
      cluster.name: traefik-tutorial-cluster
      bootstrap.memory_lock: "true"
      ES_JAVA_OPTS: -Xms256m -Xmx256m
    volumes:
      - "es-data-es01:/usr/share/elasticsearch/data"
    ulimits:
      memlock:
        soft: -1
        hard: -1
  es02:
    image: "docker.elastic.co/elasticsearch/elasticsearch-oss:7.7.1"
    ports:
      - "9201:9200"
      - "9301:9300"
    environment:
      node.name: es02
      discovery.seed_hosts: es01
      cluster.initial_master_nodes: es01,es02
      cluster.name: traefik-tutorial-cluster
      bootstrap.memory_lock: "true"
      ES_JAVA_OPTS: -Xms256m -Xmx256m
    volumes:
      - "es-data-es02:/usr/share/elasticsearch/data"
    ulimits:
      memlock:
        soft: -1
        hard: -1

volumes:
  es-data-es01:
  es-data-es02:
```
Now when we run this docker-compose setup, you will be able to reach the first node at http://localhost:9200 and the second node at http://localhost:9201. For every node we want to add to this cluster, we would have to expose yet another port from our Docker environment in order to connect to that node directly.
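To verify that both nodes joined the same cluster, you can (a quick sketch, assuming `curl` is available on your machine) query the cluster health endpoint of either node:

```shell
# Query cluster health on the first node; once both nodes have joined,
# the response should report "number_of_nodes" : 2.
curl "http://localhost:9200/_cluster/health?pretty"

# The same query against the second node should return the same cluster state.
curl "http://localhost:9201/_cluster/health?pretty"
```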
A cleaner solution would be to expose only a single port to our host. When we connect to this single port, we want our request to be loadbalanced across any of the nodes in our cluster.
Add Traefik as Loadbalancer
Traefik has different configuration providers. One of them is Docker, which allows us to configure Traefik via Docker labels. Now let us first add the Traefik container.
```yaml
version: "3.7"

services:
  gateway:
    image: traefik:v2.2
    command:
      - --api.insecure=true
      - --providers.docker=true
      - --providers.docker.exposedByDefault=false
      # define the http entrypoint on port 80, referenced by the router labels below
      - --entrypoints.http.address=:80
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
```
In order for Traefik to be able to read the Docker labels, we need to mount `docker.sock` as a volume. We also enable the Traefik Docker provider and configure it to only include containers that are explicitly enabled via a Docker label. Last but not least we enable the API, so we can also have a look at the Traefik Dashboard.
When we now run `docker-compose up -d` again, you will be able to navigate to the Traefik Dashboard at http://localhost:8080. Here you can see an overview of routers, services and middlewares for HTTP, TCP and UDP. At the moment none are configured, as we haven't specified the labels on our Elasticsearch containers just yet.
Now let's define the labels on the Elasticsearch containers. For brevity I left out the other properties of these Docker containers in the example below.
```yaml
es01:
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.elasticsearch.entrypoints=http"
    - "traefik.http.routers.elasticsearch.rule=Host(`localhost`) && PathPrefix(`/es`) || Host(`elasticsearch`)"
    - "traefik.http.routers.elasticsearch.middlewares=es-stripprefix"
    - "traefik.http.middlewares.es-stripprefix.stripprefix.prefixes=/es"
    - "traefik.http.services.elasticsearch.loadbalancer.server.port=9200"
es02:
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.elasticsearch.entrypoints=http"
    - "traefik.http.routers.elasticsearch.rule=Host(`localhost`) && PathPrefix(`/es`) || Host(`elasticsearch`)"
    - "traefik.http.routers.elasticsearch.middlewares=es-stripprefix"
    - "traefik.http.middlewares.es-stripprefix.stripprefix.prefixes=/es"
    - "traefik.http.services.elasticsearch.loadbalancer.server.port=9200"
```
With the labels on these 2 containers we do the following:

- Enable the container with Traefik
- Listen on the default http (:80) entrypoint
- Add a rule that directs all traffic for http://localhost/es to one of the Elasticsearch nodes
- Register a middleware which strips the `/es` prefix before forwarding the request
- Explicitly inform Traefik it has to connect on port 9200 of the Elasticsearch containers (required because the containers expose both port 9200 and 9300, so Traefik cannot infer which one to use)
Now when we run `docker-compose up -d` again, we will see the Elasticsearch containers being recreated. When navigating to the Traefik Dashboard you will now see that a router, a service and a middleware have been configured. With all of this in place you can access Elasticsearch at http://localhost/es. Refresh your browser a couple of times and notice you are being loadbalanced across the 2 Elasticsearch nodes.
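You can also observe the loadbalancing from the command line (a sketch, assuming `curl` is installed): the root endpoint of Elasticsearch reports the name of the node that answered, which should alternate between es01 and es02.

```shell
# Each request goes through Traefik on port 80; the "name" field in the
# response shows which node answered, alternating between es01 and es02.
curl -s http://localhost/es/ | grep '"name"'
curl -s http://localhost/es/ | grep '"name"'
```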
If you update your hosts file with the following, you can also access the Elasticsearch cluster at http://elasticsearch, which matches the other Host rule we defined in the Traefik routing rule.
127.0.0.1 localhost elasticsearch
You can now also remove the port mappings from `docker-compose.yml`. Go ahead and remove the port mappings from both containers.
```yaml
es01:
  ports:
    - 9200:9200
    - 9300:9300
es02:
  ports:
    - 9201:9200
    - 9301:9300
```
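This is also where the setup starts to scale nicely. As a sketch, a hypothetical third node (es03, with a matching es-data-es03 volume added to the top-level volumes section) only needs the same Traefik labels, and no host port mappings at all, to join the loadbalanced service:

```yaml
# Hypothetical third node: same labels as es01/es02, no host port
# mappings needed because all traffic now flows through Traefik.
es03:
  image: "docker.elastic.co/elasticsearch/elasticsearch-oss:7.7.1"
  environment:
    node.name: es03
    discovery.seed_hosts: es01,es02
    cluster.initial_master_nodes: es01,es02
    cluster.name: traefik-tutorial-cluster
    bootstrap.memory_lock: "true"
    ES_JAVA_OPTS: -Xms256m -Xmx256m
  volumes:
    - "es-data-es03:/usr/share/elasticsearch/data"
  ulimits:
    memlock:
      soft: -1
      hard: -1
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.elasticsearch.entrypoints=http"
    - "traefik.http.routers.elasticsearch.rule=Host(`localhost`) && PathPrefix(`/es`) || Host(`elasticsearch`)"
    - "traefik.http.routers.elasticsearch.middlewares=es-stripprefix"
    - "traefik.http.middlewares.es-stripprefix.stripprefix.prefixes=/es"
    - "traefik.http.services.elasticsearch.loadbalancer.server.port=9200"
```

Because all three containers register the same `elasticsearch` service, Traefik automatically adds the new node to the loadbalancer.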
Cerebro as your Elasticsearch admin interface
Last but not least I want to show you Cerebro, a nice little admin tool for working with your Elasticsearch cluster. In the following docker-compose configuration we will expose Cerebro at http://localhost/admin.
```yaml
cerebro:
  image: lmenezes/cerebro:0.8.5
  volumes:
    - "./conf/cerebro/application.conf:/opt/cerebro/conf/application.conf"
  depends_on:
    - gateway
  links:
    - "gateway:elasticsearch"
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.admin.entrypoints=http"
    - "traefik.http.routers.admin.rule=Host(`localhost`) && PathPrefix(`/admin`)"
    - "traefik.http.services.cerebro.loadbalancer.server.port=9000"
```
Also here we enable the container in Traefik. Furthermore we add a rule that listens at http://localhost/admin. We also included a link that defines a network alias called `elasticsearch` for our gateway container. Remember the rule we defined previously that listens for http://elasticsearch? We will now utilize this in the Cerebro configuration which we mount into our container.

The two important settings for Cerebro to work properly with our Traefik setup are the `basePath`, which is configured as `/admin/` because we run Cerebro at http://localhost/admin, and the host `elasticsearch`, which was defined as a Traefik routing rule and added as a network alias for the gateway container.
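As a sketch of the mounted `./conf/cerebro/application.conf` (exact keys may differ between Cerebro versions), the two settings would look roughly like this:

```hocon
# Serve Cerebro under /admin/ because Traefik routes http://localhost/admin to it.
basePath = "/admin/"

# Point Cerebro at the gateway container via its "elasticsearch" network alias;
# Traefik's Host(`elasticsearch`) rule then loadbalances across the ES nodes.
hosts = [
  {
    host = "http://elasticsearch"
    name = "traefik-tutorial-cluster"
  }
]
```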
Now you can run `docker-compose up -d` again and give Cerebro a try at http://localhost/admin.
Below you can find the entire docker-compose.yml that was covered in this blog.
Now last but not least, you could add Kibana yourself. Try to expose Kibana at http://localhost by defining a Traefik rule for Kibana. Check the links below to get started with Kibana.
- Docker Elastic Stack - Getting Started Guide
- Traefik Docker Provider
- Github repository
I hope you enjoyed this blog. As always please share this blog with your friends and colleagues and provide me with some feedback in the comments below.