Go client for Elasticsearch using Docker

In this blog post I would like to cover the recently released Go client for Elasticsearch (7.0 RC1). I will show you a small example with a simple Docker setup used to build an Elasticsearch cluster.

In my previous blog post I covered some Docker tips and tricks, which we will utilize again in this blog post.

Initializing your project

To start, we first have to create a project folder. In this folder we will initialize our Go module and add our Dockerfile and docker-compose.yml. Once we have this in place we can start development. As I like to bootstrap my projects with as few mouse clicks as possible, I'm sharing a few lines of shell script below to easily create the outline of our project.

mkdir es-demo
cd es-demo
go mod init github.com/marcofranssen/es-demo
go get -u github.com/elastic/go-elasticsearch/v7@7.x
touch Dockerfile
touch docker-compose.yml
touch main.go

For our Dockerfile we will utilize the same definition as discussed in our previous blog post. See below for the contents of the Dockerfile.

Dockerfile

FROM golang:1.12-alpine as builder

# To fix go get and build with cgo
RUN apk add --no-cache --virtual .build-deps \
    bash \
    gcc \
    git \
    musl-dev

RUN mkdir build
COPY . /build
WORKDIR /build

RUN go get
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags '-extldflags "-static"' -o webserver .
RUN adduser -S -D -H -h /build webserver
USER webserver

FROM scratch
COPY --from=builder /build/webserver /app/
WORKDIR /app
EXPOSE 5000
CMD ["./webserver"]

In my previous blog post I explain every single line of the above Dockerfile; please have a look there if you want a better understanding of its details.

Docker-compose

Next I would like to fill in the contents of our docker-compose.yml, so we have our runtime set up from an infrastructure and packaging point of view.

docker-compose.yml
version: "3.7"

services:
  web:
    image: go-docker-webserver
    build: .
    ports:
      - "5000:5000"

  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.0
    environment:
      node.name: es01
      cluster.initial_master_nodes: es01,es02
      cluster.name: docker-cluster
      bootstrap.memory_lock: "true"
      ES_JAVA_OPTS: -Xms256m -Xmx256m
    ulimits:
      memlock:
        soft: -1
        hard: -1

  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.0
    environment:
      node.name: es02
      discovery.seed_hosts: es01
      cluster.initial_master_nodes: es01,es02
      cluster.name: docker-cluster
      bootstrap.memory_lock: "true"
      ES_JAVA_OPTS: -Xms256m -Xmx256m
    ulimits:
      memlock:
        soft: -1
        hard: -1

As you can see, we only expose the web container's port to our host, so none of the other services are directly accessible. We can access our API on port 5000.

Webserver boilerplate

Last but not least, I would like to use the graceful webserver I have written about earlier as a starting point to build our API on top of Elasticsearch.

main.go
package main

import (
	"context"
	"flag"
	"log"
	"net/http"
	"os"
	"os/signal"
	"time"
)

var (
	listenAddr string
)

func main() {
	flag.StringVar(&listenAddr, "listen-addr", ":5000", "server listen address")
	flag.Parse()

	logger := log.New(os.Stdout, "http: ", log.LstdFlags)

	done := make(chan bool, 1)
	quit := make(chan os.Signal, 1)

	signal.Notify(quit, os.Interrupt)

	server := newWebserver(logger)
	go gracefullShutdown(server, logger, quit, done)

	logger.Println("Server is ready to handle requests at", listenAddr)
	if err := server.ListenAndServe(); err != nil && err != http.ErrServerClosed {
		logger.Fatalf("Could not listen on %s: %v\n", listenAddr, err)
	}

	<-done
	logger.Println("Server stopped")
}

func gracefullShutdown(server *http.Server, logger *log.Logger, quit <-chan os.Signal, done chan<- bool) {
	<-quit
	logger.Println("Server is shutting down...")

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	server.SetKeepAlivesEnabled(false)
	if err := server.Shutdown(ctx); err != nil {
		logger.Fatalf("Could not gracefully shutdown the server: %v\n", err)
	}
	close(done)
}

func newWebserver(logger *log.Logger) *http.Server {
	router := http.NewServeMux()
	router.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		logger.Println(r.Method, r.URL.Path, r.RemoteAddr, r.UserAgent())
		w.WriteHeader(http.StatusOK)
	})

	return &http.Server{
		Addr:         listenAddr,
		Handler:      router,
		ErrorLog:     logger,
		ReadTimeout:  5 * time.Second,
		WriteTimeout: 10 * time.Second,
		IdleTimeout:  15 * time.Second,
	}
}

Now with all of this in place we are able to boot up our infrastructure and have a first look at the setup from our browser.

NOTE: we haven't done anything with the Go Elasticsearch client yet, but we will come to that once we have confirmed our infrastructure is working.

Now we can run our solution with docker-compose up --build. It will build our API image and download the Elasticsearch Docker images. By accessing http://localhost:5000 you should see a blank response with HTTP status 200. You will also notice http://localhost:9200 is not available, because we don't expose that port from our Docker network.

Time for code

Now we are ready to modify our webserver to interact with Elasticsearch. For that we will utilize the go-elasticsearch client we already installed in the first few steps using the shell commands. To use it we will add an import statement, and I will add a new function that creates an Elasticsearch client.

main.go
import (
	"github.com/elastic/go-elasticsearch/v7"
)

… left for brevity …

main.go
func newEsClient(logger *log.Logger, addresses []string) *elasticsearch.Client {
	cfg := elasticsearch.Config{Addresses: addresses}
	client, err := elasticsearch.NewClient(cfg)
	if err != nil {
		logger.Println(err)
		panic(err)
	}

	return client
}

Now we can use our function to create an Elasticsearch client from our main function, so we can use it in our HTTP server. Let's add another command-line flag to configure the Elasticsearch addresses and create the client from that.

main.go
var (
	listenAddr  string
	esAddresses string
)

func main() {
	flag.StringVar(&esAddresses, "es-addresses", "http://es01:9200,http://es02:9200", "elasticsearch addresses")
	esClient := newEsClient(logger, strings.Split(esAddresses, ","))
}

In the last step I want to update our webserver function so that, when we call the endpoint, it returns our Elasticsearch cluster info using the Elasticsearch client. For that we will add the client as a parameter to our webserver and update the HTTP handler registered at / to fetch the Elasticsearch cluster info.

main.go
func newWebserver(logger *log.Logger, es *elasticsearch.Client) *http.Server {
	router := http.NewServeMux()
	router.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		logger.Println(r.Method, r.URL.Path, r.RemoteAddr, r.UserAgent())

		read, write := io.Pipe()

		go func() {
			defer write.Close()
			esInfo, err := es.Info()
			if err != nil {
				http.Error(w, err.Error(), http.StatusInternalServerError)
				return
			}
			defer esInfo.Body.Close()
			io.Copy(write, esInfo.Body)
		}()
		io.Copy(w, read)
	})

	return &http.Server{
		Addr:         listenAddr,
		Handler:      router,
		ErrorLog:     logger,
		ReadTimeout:  5 * time.Second,
		WriteTimeout: 10 * time.Second,
		IdleTimeout:  15 * time.Second,
	}
}

In the above example we pipe the body of the Elasticsearch response to our http.ResponseWriter using io.Pipe() and io.Copy(). This is a nice way to add your own logic, such as authentication, on top of Elasticsearch while otherwise piping the Elasticsearch response straight through to your API response. It is also very memory efficient, as you never hold the full response in memory but stream it directly onto your HTTP response.

The last thing left is to change the call to newWebserver, so we provide the Elasticsearch client to the function.

main.go
server := newWebserver(logger, esClient)

Let's try out our example by running docker-compose up --build -d; this rebuilds the web container and runs Docker in the background so we can more easily continue development on the next example. To see the console logs you can do docker-compose logs -f, or docker-compose logs -f web to filter out only the web container logs.

Searching an index

To give you a more tangible example, I would like to show you how to search an index as a starting point for your Elasticsearch Go journey.

First we will define a new HandlerFunc on our HTTP router which forwards our search request to Elasticsearch. In addition I will add a small helper function to build an Elasticsearch query, and a constant string for the fixed part of our query.

main.go
const searchMatch = `
	"query": {
		"multi_match": {
			"query": %q,
			"fields": ["lastName^100", "firstName^10", "country", "title"],
			"operator": "and"
		}
	},
	"highlight": {
		"fields": {
			"lastName": { "number_of_fragments": 0 },
			"firstName": { "number_of_fragments": 0 },
			"country": { "number_of_fragments": 0 },
			"title": { "number_of_fragments": 0 }
		}
	},
	"size": 25,
	"sort": [{ "_score": "desc" }, { "_doc": "asc" }]`

func buildQuery(query string) io.Reader {
	var b strings.Builder

	b.WriteString("{\n")
	b.WriteString(fmt.Sprintf(searchMatch, query))
	b.WriteString("\n}")

	return strings.NewReader(b.String())
}

func newWebserver(logger *log.Logger, es *elasticsearch.Client) *http.Server {
	router := http.NewServeMux()
	router.HandleFunc("/search", func(w http.ResponseWriter, r *http.Request) {
		logger.Println(r.Method, r.URL.Path, r.RemoteAddr, r.UserAgent())

		q := r.URL.Query().Get("q")

		read, write := io.Pipe()

		go func() {
			defer write.Close()
			res, err := es.Search(
				es.Search.WithContext(r.Context()),
				es.Search.WithIndex("people"),
				es.Search.WithBody(buildQuery(q)),
				es.Search.WithTrackTotalHits(true),
			)
			if err != nil {
				http.Error(w, err.Error(), http.StatusInternalServerError)
				return
			}
			defer res.Body.Close()
			io.Copy(write, res.Body)
		}()
		io.Copy(w, read)
	})

	return &http.Server{
		Addr:         listenAddr,
		Handler:      router,
		ErrorLog:     logger,
		ReadTimeout:  5 * time.Second,
		WriteTimeout: 10 * time.Second,
		IdleTimeout:  15 * time.Second,
	}
}

The above code adds a handler for a /search endpoint that takes the query-string parameter q to build up the Elasticsearch search request, which utilizes the POST request variant where the query is formed as a JSON body. Let's try it out: docker-compose up --build -d. When you hit the API now, you will notice we get an error response, as we don't have an index called people yet. Try it yourself at http://localhost:5000/search?q=marco.

To resolve this we will add a bit more code to our project to create the index and add some initial records to Elasticsearch.

main.go
import (
	"github.com/elastic/go-elasticsearch/v7/esapi"
)

func bootstrap(es *elasticsearch.Client) error {
	idx := "people"
	ctx := context.Background()

	_, err := esapi.IndicesDeleteRequest{Index: []string{idx}}.Do(ctx, es)
	if err != nil {
		return err
	}
	_, err = esapi.IndicesCreateRequest{Index: idx}.Do(ctx, es)
	if err != nil {
		return err
	}

	people := make([]*Person, 0, 4)
	people = append(people, &Person{
		ID:        "1",
		Title:     "Mr.",
		FirstName: "Marco",
		LastName:  "Franssen",
		Email:     "marco.franssen@elasticsearch.com",
		Country:   "The Netherlands",
	})
	people = append(people, &Person{
		ID:        "2",
		Title:     "Mr.",
		FirstName: "John",
		LastName:  "Doe",
		Email:     "john.doe@elasticsearch.com",
		Country:   "Neverland",
	})
	people = append(people, &Person{
		ID:        "3",
		Title:     "Mrs.",
		FirstName: "Jane",
		LastName:  "Doe",
		Email:     "jane.doe@golang.org",
		Country:   "Neverland",
	})
	people = append(people, &Person{
		ID:        "4",
		Title:     "Mr.",
		FirstName: "Rob",
		LastName:  "Pike",
		Email:     "rob.pike@golang.org",
		Country:   "Unknown",
	})

	for _, p := range people {
		payload, err := json.Marshal(p)
		if err != nil {
			return err
		}

		_, err = esapi.CreateRequest{
			Index:      idx,
			DocumentID: p.ID,
			Body:       bytes.NewReader(payload),
		}.Do(ctx, es)
		if err != nil {
			return err
		}
	}

	return nil
}

Last but not least call this bootstrap function in your main function right after creating the Elasticsearch client.

main.go
err := bootstrap(esClient)
if err != nil {
	panic(err)
}

Now we can run the solution again and try some searches. Run docker-compose up -d --build and then docker-compose logs -f to see your requests reach your handler. As you might have noticed, we only made it possible to search by first name, last name, country and title. For those searchable fields we also defined priorities (the ^ boosts) used to calculate the score in case your query matches multiple of those fields. Based on the records we just bootstrapped, some queries you could try, for example:

http://localhost:5000/search?q=doe
http://localhost:5000/search?q=neverland
http://localhost:5000/search?q=rob.pike@golang.org

For this last search you will not find any results, as we don't match on email addresses. Now go ahead and edit the query template yourself so you can also search by email address, then rerun your code using docker-compose up -d --build and give the last search another try.

Resources

A collection of resources you might want to check out to enrich your knowledge.

Thank you for reading my blog post. I hope this gave you a good starting point for developing your own Go Elasticsearch projects. Please share it on social media with your friends and colleagues. Oh, and before I forget: you can download the entire solution here as a zip.
