Install Hashicorp Vault on Kubernetes using Helm - Part 2
Marco Franssen /
10 min read • 1826 words
In part 1 we had a look at setting up our prerequisites and running Hashicorp Vault on our local Kubernetes cluster. This time we will deploy Hashicorp Vault on an EKS cluster at AWS, running in High Availability mode using Hashicorp Consul, and we will use AWS KMS to auto-unseal our Vault.
First, let's have a look at the new tools we are about to introduce. If you didn't read part 1, you might consider reading that first to get a bit more understanding of the basics and tooling/prerequisites we started with there.
Amazon Web Services is a subsidiary of Amazon providing on-demand cloud computing platforms and APIs to individuals, companies, and governments, on a metered pay-as-you-go basis. We will be using two of their services to deploy our highly available Vault cluster to Kubernetes.
Amazon Elastic Kubernetes Service (Amazon EKS) gives you the flexibility to start, run, and scale Kubernetes applications in the AWS cloud or on-premises. Amazon EKS helps you provide highly-available and secure clusters and automates key tasks such as patching, node provisioning, and updates.
AWS Key Management Service (KMS) makes it easy for you to create and manage cryptographic keys and control their use across a wide range of AWS services and in your applications. AWS KMS is a secure and resilient service that uses hardware security modules that have been validated under FIPS 140-2, or are in the process of being validated, to protect your keys. AWS KMS is integrated with AWS CloudTrail to provide you with logs of all key usage to help meet your regulatory and compliance needs.
Consul automates networking for simple and secure application delivery.
- Terraform (optional, if you would like to automate the AWS resource creation).
- Kubectl (also ships with Docker Desktop)
As listed above, we will use two of Amazon Web Services' products, namely EKS and KMS. Using EKS you will be able to deploy a highly available Kubernetes cluster. Using KMS we will be able to initialize Hashicorp Vault to leverage hardware security modules for unsealing.
We will also have to configure a policy and role that will allow our Vault deployment to access the KMS key.
To set up an EKS cluster you could leverage the Terraform EKS module. This Terraform module also has a kubeconfig output that is very handy to retrieve a Kubernetes configuration to connect your Helm to the cluster, or you can simply have it write the configuration to a file using the write_kubeconfig variable. Once you have the configuration written to a file you can simply use it from your terminal.
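For example, assuming the module wrote the kubeconfig to a file named kubeconfig_my-cluster (the exact file name depends on your cluster name), you could point your tools at it like this:

```shell
# Use the kubeconfig written by the Terraform EKS module
export KUBECONFIG="$PWD/kubeconfig_my-cluster"

# Verify connectivity to the cluster
kubectl get nodes
```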
For this article I will focus on the specifics required to access the KMS key from our EKS cluster and leave the provisioning of the EKS cluster itself out of scope.
Consider the remainder of this setup to be based on an existing EKS setup, where we will utilize Terraform to create the KMS key and define the policy that allows our Vault deployment to access the KMS key. (You could also create the resources as defined in Terraform by hand in the AWS console.)
See following piece of Terraform configuration to create the resources.
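A sketch of what that Terraform configuration could look like is shown below. The resource names, the policies folder, and the module name cluster (my EKS module, mentioned further down) are assumptions you should adapt to your own setup:

```hcl
# KMS key used by Vault for auto-unseal
resource "aws_kms_key" "vault" {
  description             = "Vault auto-unseal key"
  deletion_window_in_days = 10
}

# Human readable alias for the key
resource "aws_kms_alias" "vault" {
  name          = "alias/vault-kms-unseal"
  target_key_id = aws_kms_key.vault.key_id
}

# IAM Role that the Vault service account will assume via the OIDC provider
resource "aws_iam_role" "vault_unseal" {
  name = "vault-unseal"
  assume_role_policy = templatefile("${path.module}/policies/vault-assume-role.json", {
    oidc_arn = module.cluster.oidc_provider_arn
    oidc_url = module.cluster.oidc_provider_url
  })
}

# Fine-grained KMS permissions attached to the role
resource "aws_iam_role_policy" "vault_kms" {
  name = "vault-kms-unseal"
  role = aws_iam_role.vault_unseal.id
  policy = templatefile("${path.module}/policies/vault-kms.json", {
    kms_key_arn = aws_kms_key.vault.arn
  })
}
```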
With these resources we create a KMS key, which we will use to unseal Hashicorp Vault. Furthermore we create an alias for the key, which allows us to refer to the key by a more human-readable name. We also create an IAM Role that we authorize to access the key. Our Hashicorp Vault deployment later on in this article will assume this role to get access to the KMS key. To be able to assume the role we define a policy that allows assuming the role, as well as a policy that allows us to configure fine-grained permissions for our KMS key. As you might have noticed, we also created two policy files in JSON format. Let's also have a look at these.
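The two files could look roughly as follows. The namespace/service account in the trust policy (vault namespace, vault service account) is an assumption matching a default Helm install; adjust it to where you deploy Vault. The KMS actions listed are the ones Vault's awskms seal needs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Federated": "${oidc_arn}" },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${oidc_url}:sub": "system:serviceaccount:vault:vault"
        }
      }
    }
  ]
}
```

And the policy granting access to the KMS key itself:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["kms:Encrypt", "kms:Decrypt", "kms:DescribeKey"],
      "Resource": "${kms_key_arn}"
    }
  ]
}
```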
Both files have some variables which we inject via the Terraform definitions. As you can see, we are referencing the other resources as defined in Terraform. One of these refers to a module named cluster, which might differ in your setup. My cluster module uses the Terraform EKS module that provisioned my EKS cluster. This module also provisions an OIDC provider. Ensure you at least configure the ARN of your OIDC provider. If it is not provisioned via Terraform, ensure you put this in place properly as well.
Hashicorp Consul is Hashicorp's service mesh. Using Consul we will make our Vault cluster highly available by storing the Vault state in Consul's key-value storage. As we have seen in part 1, we have been using Helm charts to deploy to our cluster. In the Hashicorp Helm repository we can also find the Consul chart. To install Consul I will go with the defaults; however, you could check out the values.yaml for Consul to see to what extent you want to fine-tune the deployment to your own needs.
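Installing Consul with the chart defaults could look like this (the release name consul is my choice, not mandated by the chart):

```shell
# Add the Hashicorp Helm repository and install Consul with default values
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
helm install consul hashicorp/consul
```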
To install Vault in High Availability mode we will create a new values.yaml file that holds the configuration to use Consul and the KMS unseal. In this file we will override some of the chart defaults to deploy Vault in HA mode, utilizing Consul as well as the KMS key.
Let's first create our
vault-ha.yaml file to configure our specific deployment needs for the Helm chart. Replace the variables in this file with the values of your IAM Role, your KMS Key alias and your AWS Region.
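A sketch of what vault-ha.yaml could contain is shown below. The placeholders in angle brackets are the variables to replace; the role annotation wires the Vault service account to the IAM Role we created earlier:

```yaml
server:
  serviceAccount:
    annotations:
      # ARN of the IAM Role that is allowed to use the KMS key
      eks.amazonaws.com/role-arn: "<your-iam-role-arn>"
  ha:
    enabled: true
    config: |
      ui = true

      # Store Vault's state in Consul's key-value storage
      storage "consul" {
        path    = "vault"
        address = "HOST_IP:8500"
      }

      # Auto-unseal using the AWS KMS key via its alias
      seal "awskms" {
        region     = "<your-aws-region>"
        kms_key_id = "alias/vault-kms-unseal"
      }
```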
With this configuration file in place we can now deploy Vault to our EKS cluster.
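Deploying it could then be as simple as (again assuming the hashicorp repository was added and vault as release name):

```shell
# Install Vault in HA mode using our custom values file
helm install vault hashicorp/vault -f vault-ha.yaml
```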
Now the last thing remaining to do is to initialize our Vault.
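Initialization happens from within one of the Vault pods, for example the first one:

```shell
# Initialize Vault; with KMS auto-unseal this prints recovery keys
# instead of unseal keys
kubectl exec -ti vault-0 -- vault operator init
```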
As you can see, we don't get any unseal keys this time. After all, we are using an AWS KMS key to unseal our Vault. However, if we were to lose access to the KMS key, we can still use 3 out of these 5 recovery keys to regain access to our Vault. Ensure to store this information in a safe place and preferably distribute these recovery keys across a few places, so not all keys for nuclear control are in the same place.
Using policies and IAM roles we can define fine-grained access to AWS resources for our Kubernetes resources. There are many ways of achieving this, but I found this approach the most convenient and easiest to configure.
- AWS IAM Role gives fine-grained permissions on defined AWS resources using a policy.
- Kubernetes resource assumes the IAM Role to utilize the fine-grained permissions as defined in the attached policy.
This way your Kubernetes resources can't do more within your AWS environment than they are supposed to. Ensure you always configure these permissions to be as fine-grained as possible to not be subject to any security issues.
Using yaml value files we can override the default values of a Helm chart to customize the deployment to our own needs. I recommend committing these value files to your Git repository as well.
- AWS EKS
- AWS KMS
- Terraform Module EKS
I hope you enjoyed this second part and got a bit more insight into deploying Helm charts with custom configurations and specific access to cloud resources.