Building an Elasticsearch cluster using Docker Compose and Traefik
Marco Franssen
6 min read • 1139 words
In a previous blog I have already written about setting up Elasticsearch using docker-compose.yml. I have also shown you before how to set up Traefik 1.7 in docker-compose.yml. Today I want to show you how we can use Traefik to expose a load-balanced endpoint on top of an Elasticsearch cluster.
We will set up our cluster using docker-compose so we can easily run and clean up this cluster from our laptop.
Create an Elasticsearch cluster
Let's first create a 2 node Elasticsearch cluster using the following docker-compose setup.
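A minimal sketch of what this could look like; the Elasticsearch version, cluster name and JVM memory settings are assumptions you should adapt to your own needs:

```yaml
version: "3.7"

services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      # point at the other node so both nodes form one cluster
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=es01,es02
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - 9200:9200
    networks:
      - elastic

  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01
      - cluster.initial_master_nodes=es01,es02
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      # map the container's 9200 to 9201 on the host to avoid a port conflict
      - 9201:9200
    networks:
      - elastic

networks:
  elastic:
    driver: bridge
```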
Now when we run this docker-compose setup, you will be able to reach the first node at http://localhost:9200 and the second node at http://localhost:9201. For every node we add to this cluster, we would have to expose yet another port from our Docker environment to be able to connect to that node directly.
A cleaner solution would be to expose only a single port to our host. Any request to this single port should then be load-balanced across the nodes in our cluster.
Add Traefik as load balancer
Traefik supports several configuration providers. One of them is Docker, which allows Traefik to be configured via Docker labels. Let us first add the Traefik container.
In order for Traefik to be able to read the Docker labels, we need to mount docker.sock as a volume. We will also enable the Traefik Docker provider and configure it to only include containers that are explicitly enabled using a Docker label. Last but not least we will enable the API, so we can also have a look at the Traefik dashboard.
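A sketch of the Traefik service, using Traefik v2 syntax; the exact version tag is an assumption:

```yaml
  traefik:
    image: traefik:v2.2
    command:
      # enable the dashboard/API without authentication (fine for local use)
      - --api.insecure=true
      # read dynamic configuration from Docker labels
      - --providers.docker=true
      # only pick up containers that carry traefik.enable=true
      - --providers.docker.exposedbydefault=false
      # the entrypoint our routers will listen on
      - --entrypoints.web.address=:80
    ports:
      - 80:80
      # Traefik dashboard
      - 8080:8080
    volumes:
      # lets Traefik watch container events and read labels
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - elastic
```

With --api.insecure=true the dashboard becomes available on port 8080.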
When we now run docker-compose up -d again, you will be able to navigate to the Traefik dashboard at http://localhost:8080. Here you can see an overview of routers, services and middlewares for HTTP, TCP and UDP. At the moment none are configured, as we didn't specify the labels on our Elasticsearch containers just yet.
Now let's define the labels on the Elasticsearch containers. For brevity I left out the other properties of these Docker containers in the example following the list below.
With the labels on these 2 containers we do the following:
Enable the container in Traefik
Listen on the default http (:80) entrypoint
Add a rule that directs all traffic for http://localhost/es to one of the Elasticsearch nodes
Register a middleware which will strip the /es prefix before forwarding the request.
Explicitly inform Traefik it has to connect on port 9200 of the Elasticsearch containers (required because Elasticsearch exposes both ports 9200 and 9300)
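A sketch of what those labels could look like on es01; the router, service and middleware names (elasticsearch, es-stripprefix) are my own choice:

```yaml
  es01:
    # ...image, environment and networks as shown earlier
    labels:
      - traefik.enable=true
      - traefik.http.routers.elasticsearch.entrypoints=web
      # match http://localhost/es as well as http://elasticsearch
      - traefik.http.routers.elasticsearch.rule=(Host(`localhost`) && PathPrefix(`/es`)) || Host(`elasticsearch`)
      - traefik.http.routers.elasticsearch.middlewares=es-stripprefix
      # strip the /es prefix before forwarding the request
      - traefik.http.middlewares.es-stripprefix.stripprefix.prefixes=/es
      # Elasticsearch exposes 9200 and 9300, so be explicit about the port
      - traefik.http.services.elasticsearch.loadbalancer.server.port=9200
```

es02 gets exactly the same labels; because both containers declare the same router and service names, Traefik merges them into a single service and load-balances across the two nodes.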
Now when we run docker-compose up -d again, we will see the Elasticsearch containers being recreated. When navigating to the Traefik dashboard you will now see that a router, a service and a middleware have been configured. With all of this in place you can access Elasticsearch at http://localhost/es. Refresh your browser a couple of times and notice that you are being load-balanced across the 2 Elasticsearch nodes.
If you update your hosts file with the following entry, you can also access the Elasticsearch cluster at http://elasticsearch, the other host we defined in the Traefik routing rule.
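On Linux and macOS the hosts file lives at /etc/hosts (on Windows at C:\Windows\System32\drivers\etc\hosts); map the elasticsearch hostname to your loopback address:

```
127.0.0.1    elasticsearch
```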
You can now also remove the port mappings from docker-compose.yml, so please go ahead and remove the ports mapping from both Elasticsearch containers.
Cerebro as your Elasticsearch admin interface
Last but not least I want to show you Cerebro, a nice little admin tool to work with your Elasticsearch cluster. In the following docker-compose configuration we will expose Cerebro at http://localhost/admin.
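A sketch of the Cerebro service; the image tag, the cerebro.conf file name and the in-container config path are assumptions:

```yaml
  cerebro:
    image: lmenezes/cerebro:0.9.2
    volumes:
      # mount our Cerebro configuration (shown below)
      - ./cerebro.conf:/opt/cerebro/conf/application.conf
    links:
      # give the Traefik gateway container the network alias "elasticsearch"
      - traefik:elasticsearch
    labels:
      - traefik.enable=true
      - traefik.http.routers.cerebro.entrypoints=web
      - traefik.http.routers.cerebro.rule=Host(`localhost`) && PathPrefix(`/admin`)
      # Cerebro listens on port 9000 inside the container
      - traefik.http.services.cerebro.loadbalancer.server.port=9000
    networks:
      - elastic
```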
Here too we enable the container in Traefik. Furthermore we add a rule that listens at http://localhost/admin. We also include a link that defines a network alias named elasticsearch for our gateway container. Remember, we previously defined a rule that listened for http://elasticsearch? We will now utilize this in the Cerebro configuration which we mount into our container.
The two important settings for Cerebro to work properly with our Traefik setup are basePath, configured as /admin/ because we run Cerebro at http://localhost/admin, and the host http://elasticsearch, which matches the Traefik routing rule and the alias we added for the gateway container.
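A minimal sketch of that Cerebro configuration file; your real application.conf will contain more settings (such as the application secret), and the cluster name shown is an assumption:

```
# Cerebro is served behind the /admin prefix, so it needs to know its base path
basePath = "/admin/"

# reach the cluster through the Traefik alias rather than a node directly
hosts = [
  {
    host = "http://elasticsearch"
    name = "es-docker-cluster"
  }
]
```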
Now you can run docker-compose up -d again. Give Cerebro a try at http://localhost/admin.
Summary
Below you can find the entire docker-compose.yml that was covered in this blog.
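Assembled from the snippets above, the complete file could look like this; image versions and the router/middleware names are the assumptions made earlier, and a YAML anchor keeps the shared Elasticsearch labels in one place:

```yaml
version: "3.7"

# shared Traefik labels for both Elasticsearch nodes (YAML anchor)
x-es-labels: &es-labels
  - traefik.enable=true
  - traefik.http.routers.elasticsearch.entrypoints=web
  - traefik.http.routers.elasticsearch.rule=(Host(`localhost`) && PathPrefix(`/es`)) || Host(`elasticsearch`)
  - traefik.http.routers.elasticsearch.middlewares=es-stripprefix
  - traefik.http.middlewares.es-stripprefix.stripprefix.prefixes=/es
  - traefik.http.services.elasticsearch.loadbalancer.server.port=9200

services:
  traefik:
    image: traefik:v2.2
    command:
      - --api.insecure=true
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - 80:80
      - 8080:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - elastic

  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=es01,es02
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    labels: *es-labels
    networks:
      - elastic

  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01
      - cluster.initial_master_nodes=es01,es02
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    labels: *es-labels
    networks:
      - elastic

  cerebro:
    image: lmenezes/cerebro:0.9.2
    volumes:
      - ./cerebro.conf:/opt/cerebro/conf/application.conf
    links:
      - traefik:elasticsearch
    labels:
      - traefik.enable=true
      - traefik.http.routers.cerebro.entrypoints=web
      - traefik.http.routers.cerebro.rule=Host(`localhost`) && PathPrefix(`/admin`)
      - traefik.http.services.cerebro.loadbalancer.server.port=9000
    networks:
      - elastic

networks:
  elastic:
    driver: bridge
```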
Homework
Now last but not least, you could add Kibana yourself. Try to expose Kibana at http://localhost by defining a Traefik rule for Kibana. The official Kibana Docker documentation is a good place to get started.
I hope you enjoyed this blog. As always, please share it with your friends and colleagues and provide me with some feedback in the comments below.