Install Hashicorp Vault on Kubernetes using Helm - Part 2

Marco Franssen


10 min read · 1826 words


In part 1 we had a look at setting up our prerequisites and running Hashicorp Vault on our local Kubernetes cluster. This time we will have a look at deploying Hashicorp Vault on an EKS cluster at AWS. We will deploy a Vault cluster in High Availability mode using Hashicorp Consul, and we will use AWS KMS to auto unseal our Vault.

First let's have a look at the new tools we are about to introduce. If you didn't read part 1, you might consider reading that first to get a bit more understanding of the basics and tooling/prerequisites we started with there.


Amazon Web Services is a subsidiary of Amazon providing on-demand cloud computing platforms and APIs to individuals, companies, and governments, on a metered pay-as-you-go basis. We will be using two of their services to deploy our highly available Vault cluster to Kubernetes.


Amazon Elastic Kubernetes Service (Amazon EKS) gives you the flexibility to start, run, and scale Kubernetes applications in the AWS cloud or on-premises. Amazon EKS helps you provide highly-available and secure clusters and automates key tasks such as patching, node provisioning, and updates.


AWS Key Management Service (KMS) makes it easy for you to create and manage cryptographic keys and control their use across a wide range of AWS services and in your applications. AWS KMS is a secure and resilient service that uses hardware security modules that have been validated under FIPS 140-2, or are in the process of being validated, to protect your keys. AWS KMS is integrated with AWS CloudTrail to provide you with logs of all key usage to help meet your regulatory and compliance needs.

Hashicorp Consul

Service Mesh for any runtime or cloud

Consul automates networking for simple and secure application delivery.


  • Terraform (optional, if you would like to automate the AWS resource creation).
  • Kubectl (also ships with Docker Desktop)
  • Helm

Setup AWS

As listed above, we will use two Amazon Web Services products, namely EKS and KMS. Using EKS you will be able to deploy a highly available Kubernetes cluster. Using KMS we will be able to initialize Hashicorp Vault to leverage AWS's hardware security modules for unsealing.

We will also have to configure a policy and a role that allow our Vault deployment to access the KMS key.

To set up an EKS cluster you could leverage the Terraform EKS module. This Terraform module also has an output, kubeconfig, that is very handy to retrieve a Kubernetes configuration to connect your kubectl and helm to the cluster, or you can simply have it write the configuration to file using the write_kubeconfig variable. Once you have the configuration written to a file you can simply use it from your terminal.

$ KUBECONFIG=./my-eks-cluster/kubeconfig kubectl cluster-info
Kubernetes control plane is running at
CoreDNS is running at
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

For this article I will focus on the specifics required to access the KMS key from our EKS cluster and outscope the specifics of provisioning the EKS cluster itself.

Consider the remainder of this setup to be based on an existing EKS setup, where we will utilize Terraform to create the KMS key and define the policy that allows our Vault deployment to access the KMS key. (You could also create the resources defined in Terraform by hand in the AWS console.)

See the following piece of Terraform configuration to create the resources.
resource "aws_kms_key" "vault_ha_cluster" {
  description = "Key for unsealing my Vault HA setup using Consul"
  policy = templatefile("${path.module}/policies/kms.json", {
    vault_arn = aws_iam_role.vault.arn
  })

  tags = {
    environment = local.environment
  }
}

resource "aws_kms_alias" "vault_ha_cluster" {
  name          = "alias/vault-ha-cluster"
  target_key_id = aws_kms_key.vault_ha_cluster.key_id
}

resource "aws_iam_role" "vault" {
  name = "eks-${local.environment}-vault"

  assume_role_policy = templatefile("${path.module}/policies/assume-role-oidc.json", {
    openid_connect_provider_arn = module.cluster.oidc.provider_arn
  })
}
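Two of the values you will need later in the Helm values file are the role ARN and the key alias. If you provision with Terraform anyway, you could expose them as outputs; a small sketch, assuming the resources above (the output names are my own):

```hcl
output "vault_kms_role_arn" {
  description = "ARN of the IAM role the Vault service account will assume"
  value       = aws_iam_role.vault.arn
}

output "vault_kms_key_alias" {
  description = "KMS key alias for the awskms seal configuration"
  value       = aws_kms_alias.vault_ha_cluster.name
}
```

After terraform apply, running terraform output will print both values, so you don't have to dig them out of the AWS console.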

With these resources we create a KMS key, which we will use to unseal Hashicorp Vault. Furthermore we create an alias for the key, which allows us to refer to the key by a more human-readable name. We also create an IAM Role that we authorize to access the key. Our Hashicorp Vault deployment later on in this article will assume this role to get access to the KMS key. To be able to assume the role we define a policy that allows assuming the role, and we also define a policy that allows us to configure fine-grained permissions for our KMS key. As you might have noticed, we referenced two policy files in JSON format. Let's have a look at these as well.

policies/kms.json:

{
  "Version": "2012-10-17",
  "Id": "key-cloud-healthcare-poc-eks-dev-cluster",
  "Statement": [
    {
      "Sid": "Enable IAM User Permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::759729069002:root"
      },
      "Action": "kms:*",
      "Resource": "*"
    },
    {
      "Sid": "Allow access eks dev-cluster",
      "Effect": "Allow",
      "Principal": {
        "AWS": "${vault_arn}"
      },
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:DescribeKey"
      ],
      "Resource": "*"
    }
  ]
}

The second statement grants the Vault role only the KMS actions the awskms seal requires (Encrypt, Decrypt and DescribeKey).

policies/assume-role-oidc.json:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Federated": "${openid_connect_provider_arn}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity"
    }
  ]
}

Both files have some variables which we inject via the Terraform definitions. As you can see we are referencing the other resources defined in Terraform. One of these refers to a module named cluster, which might differ in your setup. My cluster module is the module that provisioned my EKS cluster using the Terraform EKS module. This module also provisions an OIDC provider. Ensure you at least configure the ARN of your OIDC provider. If it is not provisioned via Terraform, ensure you put it in place properly as well.

Install Hashicorp Consul

Hashicorp Consul is Hashicorp's service mesh. Using Consul we will make our Vault cluster highly available by storing the Vault state in Consul's key-value storage. As we have seen in part 1, we use Helm charts to deploy to our EKS cluster. In the Hashicorp Helm repository we can also find the Consul chart. To install Consul I will go with the defaults; however, you could check out the values.yaml for Consul to see to what extent you want to fine-tune the deployment to your own needs.

$ helm -n my-consul install --create-namespace consul hashicorp/consul
W0725 20:08:30.419969    4478 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
W0725 20:08:30.792426    4478 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
NAME: consul
LAST DEPLOYED: Sun Jul 25 20:08:30 2021
NAMESPACE: my-consul
STATUS: deployed
Thank you for installing HashiCorp Consul!
Now that you have deployed Consul, you should look over the docs on using
Consul with Kubernetes available here:
Your release is named consul.
To learn more about the release, run:
  $ helm status consul
  $ helm get all consul

Install Vault in HA mode

To install Vault in High Availability mode we will create a new values.yaml file that holds the configuration to use Consul and the KMS unseal. In this file we will override some of the chart defaults to deploy Vault in HA mode, utilizing Consul as well as the KMS key.

Let's first create our vault-ha.yaml file to configure our specific deployment needs for the Helm chart. Replace the variables in this file with the values of your IAM Role, your KMS Key alias and your AWS Region.

server:
  ingress:
    enabled: true
    hosts:
      - host:
        paths: []
  serviceAccount:
    annotations: |
      eks.amazonaws.com/role-arn: $VAULT_KMS_ROLE
  ha:
    enabled: true
    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }

      storage "consul" {
        path = "vault"
        address = "HOST_IP:8500"
      }

      service_registration "kubernetes" {}

      seal "awskms" {
        kms_key_id = "$VAULT_KMS_KEY_ID"
        region     = "$AWS_REGION"
      }

ui:
  enabled: true
  serviceType: LoadBalancer

With this configuration file in place we can now deploy Vault to our EKS cluster.

$ helm -n my-vault install --create-namespace -f ./vault-ha.yaml my-vault hashicorp/vault
NAME: my-vault
LAST DEPLOYED: Sun Jul 25 20:22:32 2021
NAMESPACE: my-vault
STATUS: deployed
Thank you for installing HashiCorp Vault!
Now that you have deployed Vault, you should look over the docs on using
Vault with Kubernetes available here:
Your release is named my-vault. To learn more about the release, try:
  $ helm status my-vault
  $ helm get manifest my-vault

Now the last thing remaining to do is to initialize our Vault.

$ kubectl --namespace my-vault exec -it vault-1625395823-0 -- vault operator init
Recovery Key 1: calc++d/XLg0O4o6N0eXDGYXggan0CZlRdWQLp4TYzsh
Recovery Key 2: i4LPPGJ36W+ZeF2TbeT1KCoyH2fGjBTA/Sx2siDm+Vxe
Recovery Key 3: G3+KaTAOExG+NVuKgz/CvqVedu90yypX3GEmQy8F4WB7
Recovery Key 4: BHpifWhHEmoYBe9nD/fN7AYrYbAke+zrH0kszsT44uDp
Recovery Key 5: bKHrdx+cnUQv0ix9FkTGVQ+a7B4Je/wDj3Z1T8E76ztD
Initial Root Token: s.CX1hTxMFCPP7NhvOOdK0YKRS
Success! Vault is initialized
Recovery key initialized with 5 key shares and a key threshold of 3. Please
securely distribute the key shares printed above.

As you can see we don't get any unseal keys this time; after all, we are using an AWS KMS key to unseal our Vault. However, if we were to lose access to the KMS key, we can still use 3 out of these 5 recovery keys to regain access to our Vault. Ensure you store this information in a safe place, and preferably distribute these recovery keys across a few places so not all keys for nuclear control are in the same place.

Key Takeaways

Using policies and IAM roles we can define fine-grained access to AWS resources for our Kubernetes resources. There are many ways of achieving this, but I found this approach the most convenient and easiest to configure.

  • An AWS IAM Role gives fine-grained permissions on defined AWS resources using a policy.
  • A Kubernetes resource assumes the IAM Role to utilize the fine-grained permissions defined in the attached policy.

This way your Kubernetes resources can't do more within your AWS environment than they are supposed to do. Ensure you always configure these permissions to be as fine-grained as possible so you are not subject to any security issues.

Using YAML files we can override the default values of a Helm chart to customize the deployment to our own needs. I recommend committing these values files to your Git repository as well.


I hope you enjoyed this second part and got a bit more insight into deploying Helm charts with custom configurations and specific access to cloud resources.

