I’m running a couple of clusters in my lab environment. One of them is a “hub” cluster that handles all of the core, centralized services which shouldn’t be considered workloads. As part of the hub cluster setup, I wanted a secure KMS, and there is no better choice than HashiCorp Vault. Vault has a great Helm chart for setup, and combining that chart with an ArgoCD Application CR gives you most of what you need.
The Complication
Running Vault on Kubernetes has a complication. The application runs as a StatefulSet, which means it’s running multiple pods. One of Vault’s core features is that when it starts, it is “sealed,” meaning it can’t be accessed or used until it is “unsealed” with the unseal keys – https://developer.hashicorp.com/vault/docs/concepts/seal#seal-unseal. This poses a problem in Kubernetes because of the ephemeral, transient nature of the workloads. Pods are meant to move around, and when you are diligent about patching your cluster (you are patching your cluster, right?), every time the Vault pods move they will need to be unsealed again. So as part of this deployment, we will also configure the Vault pods to auto-unseal.
Set Up AWS KMS
We are going to use AWS KMS for the auto-unseal functionality. For this, we will need to set up some access credentials and an AWS KMS key for Vault to integrate with. You will want to create an IAM user and give that user the permissions it needs to access the AWS KMS service. Once you have that, create an access key and copy those values somewhere safe; we will then use them to create a secret that Vault can use to talk to AWS KMS. If you are using the AWS CLI, here are the commands. Take note of the “KeyId” and “Arn” in the create-key output.
aws kms create-key
aws iam create-user --user-name <username>
aws iam attach-user-policy \
  --policy-arn arn:aws:iam::aws:policy/AWSKeyManagementServicePowerUser \
  --user-name <username>
aws iam create-access-key --user-name <username>
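A note on permissions: AWSKeyManagementServicePowerUser is broad. If you prefer least privilege, Vault’s awskms seal only needs kms:Encrypt, kms:Decrypt, and kms:DescribeKey on the unseal key itself. Here’s a minimal sketch of an inline policy, assuming the “Arn” from the create-key output above (the vault-kms-unseal policy name is just an example):
cat > vault-kms-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["kms:Encrypt", "kms:Decrypt", "kms:DescribeKey"],
      "Resource": "<Arn>"
    }
  ]
}
EOF
aws iam put-user-policy --user-name <username> \
  --policy-name vault-kms-unseal \
  --policy-document file://vault-kms-policy.json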
Now that we have the infrastructure set up, let’s start on the OpenShift side. Let’s take the access key we just created and create a secret from it.
oc create namespace vault
oc create secret generic aws-credentials -n vault \
  --from-literal=AWS_ACCESS_KEY_ID=<value> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<value>
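If you want to double-check the secret before Vault consumes it, oc describe shows the key names and their sizes without printing the values:
oc describe secret aws-credentials -n vault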
End to End TLS? Absolutely
Another thing about the Vault setup: we definitely want TLS across the entire network path. This means we need to generate a TLS certificate and configure it in the server itself, then configure the OpenShift route to use TLS passthrough to ensure that all communications – internal and external – to the Vault server are secure.
For this, we use OpenShift cert-manager. For more information about OpenShift cert-manager setup, check out my blog post here. And here’s our certificate.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: vault-certificate
  namespace: vault
spec:
  commonName: vault.apps.hub.ocp.lab.snimmo.com
  dnsNames:
    - vault.apps.hub.ocp.lab.snimmo.com
  issuerRef:
    group: cert-manager.io
    kind: ClusterIssuer
    name: letsencrypt-cluster-issuer
  secretName: vault-tls
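Before moving on, it’s worth confirming that cert-manager has actually issued the certificate and created the vault-tls secret the chart will mount:
oc get certificate vault-certificate -n vault
oc get secret vault-tls -n vault
The certificate should report READY as True before you deploy Vault; otherwise the pods will sit waiting on a TLS volume that doesn’t exist yet.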
OpenShift GitOps
After installing the OpenShift GitOps operator, we can apply an Application CR to our cluster for the Vault Helm chart with all our custom configuration. Notes are below.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: hashicorp-vault-application
  namespace: openshift-gitops
spec:
  project: default
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: vault
  source:
    repoURL: 'https://helm.releases.hashicorp.com'
    targetRevision: 0.28.0
    chart: vault
    helm:
      releaseName: vault
      values: |
        global:
          openshift: true
          tlsDisable: false
        server:
          extraSecretEnvironmentVars:
            - envName: AWS_ACCESS_KEY_ID
              secretName: aws-credentials
              secretKey: AWS_ACCESS_KEY_ID
            - envName: AWS_SECRET_ACCESS_KEY
              secretName: aws-credentials
              secretKey: AWS_SECRET_ACCESS_KEY
          ha:
            enabled: true
            raft:
              enabled: true
              setNodeId: false
              config: |
                ui = true
                listener "tcp" {
                  address = "[::]:8200"
                  cluster_address = "[::]:8201"
                  tls_key_file = "/vault/tls/tls.key"
                  tls_cert_file = "/vault/tls/tls.crt"
                }
                storage "raft" {
                  path = "/vault/data"
                }
                service_registration "kubernetes" {}
                seal "awskms" {
                  region = "us-east-1"
                  kms_key_id = "<KeyId>"
                }
          affinity: {}
          route:
            enabled: true
            host: vault.apps.hub.ocp.lab.snimmo.com
            tls:
              termination: passthrough
          volumes:
            - name: vault-tls
              secret:
                secretName: vault-tls
          volumeMounts:
            - name: vault-tls
              readOnly: true
              mountPath: "/vault/tls"
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
    automated:
      prune: true
      selfHeal: true
  ignoreDifferences:
    - group: admissionregistration.k8s.io
      kind: MutatingWebhookConfiguration
      jqPathExpressions:
        - .webhooks[]?.clientConfig.caBundle
One of the first things you will notice is the use of inline Helm values rather than parameters. Honestly, because we are providing so much inline YAML for the Vault config, keeping everything in one values block is just the easier way to maintain the manifest.
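For comparison, the same kind of override expressed as Argo CD Helm parameters would look something like this (just a sketch to show the shape; nested blocks like the listener config don’t translate nicely to this style):
helm:
  releaseName: vault
  parameters:
    - name: global.openshift
      value: 'true'
    - name: server.route.enabled
      value: 'true'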
Initial Unsealing
Once the Vault pods are running, initialize Vault from the first pod:
oc exec -n vault --stdin=true --tty=true vault-0 -- vault operator init -tls-skip-verify
When this executes, it will output the recovery keys and initial root token. Copy those someplace safe.
Recovery Key 1: H<key>a
Recovery Key 2: Y<key>z
Recovery Key 3: M<key>c
Recovery Key 4: /<key>9
Recovery Key 5: C<key>4
Initial Root Token: hvs.3<key>P
Success! Vault is initialized
Recovery key initialized with 5 key shares and a key threshold of 3. Please
securely distribute the key shares printed above.
Now we need to unseal the initial Vault pod. Run this command three times, supplying a different key from your output each time (the threshold is 3).
oc exec -n vault -ti vault-0 -- vault operator unseal -tls-skip-verify
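To confirm the pod is unsealed and that the KMS integration is active, check vault status; with the config above, Seal Type should read awskms and Sealed should be false:
oc exec -n vault -ti vault-0 -- vault status -tls-skip-verify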
Now we just need to join the other two pods to the Raft cluster, as sketched below.
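Since the server config above doesn’t define a retry_join stanza, the join is manual. A minimal sketch, assuming the chart’s default vault-internal headless service name; -tls-skip-verify matches the commands above, and depending on your certificate’s SANs, the joining node’s TLS verification of the leader may also need attention (for example, passing -leader-ca-cert or adding the internal service names to the Certificate’s dnsNames):
oc exec -n vault -ti vault-1 -- vault operator raft join -tls-skip-verify "https://vault-0.vault-internal:8200"
oc exec -n vault -ti vault-2 -- vault operator raft join -tls-skip-verify "https://vault-0.vault-internal:8200"
With the awskms seal in place, both nodes unseal themselves automatically once they join.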