Configuring SSL certificates

Yellowbrick Manager and Yellowbrick instances ship with self-signed certificates. To replace them with your own certificates, follow the instructions below.

WARNING

The final step of this process requires restarting the Yellowbrick instance(s), so a short period of scheduled downtime is necessary. This will be improved in a future release.

For illustrative purposes, this guide uses AWS Route 53 and a Let's Encrypt ClusterIssuer from the cert-manager project. cert-manager also supports other certificate sources, such as HashiCorp Vault and AWS KMS, as well as other cloud providers such as Azure and GCP.

Prerequisites

  • DNS entries (A records or CNAMEs) pointing to the Service IP addresses for the manager and instance deployments. See Vanity DNS. A quick way to check these records is sketched below.
  • kubectl with access to the Kubernetes cluster and permission to add/edit secrets.
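
You can confirm the prerequisites before starting. A minimal sketch, assuming dig is installed and using the example hostname (manager.yb.xip.net) from the steps below:

bash
# List Services and their external IPs; the DNS records should point at these addresses
kubectl get svc -A -o wide

# Confirm that a DNS record resolves (example hostname)
dig +short manager.yb.xip.net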

Setting up kubeconfig

To point kubectl at the correct Kubernetes cluster, set your kubeconfig:

AWS:

bash
aws eks --region $region update-kubeconfig --name $eksClusterName

EKS may require additional steps to grant your IAM user access to Kubernetes objects. For more information see Kubeconfig Setup on EKS.

Azure:

bash
az aks get-credentials --resource-group $resourceGroup --name $aksClusterName --admin --overwrite

GCP:

bash
gcloud container clusters get-credentials $clusterName --region=$region
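
Whichever provider you use, you can confirm that your credentials can manage secrets before continuing; a quick check with kubectl's built-in authorization query:

bash
# Verify that the current user can create and edit secrets
kubectl auth can-i create secrets --all-namespaces
kubectl auth can-i update secrets --all-namespaces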

Updating the Default Yellowbrick Manager Certificate

Setup

Step 1: Check the certificate before replacing

bash
openssl s_client -showcerts -connect ${yellowbrickManagerIp}:443 </dev/null
    ## Examples: 
    #    openssl s_client -showcerts -connect ***.***.***.168.com:443 </dev/null
    #    openssl s_client -showcerts -connect manager.xyz.dev.yellowbrickcloud.com:443 </dev/null

    ## Example output:
    #   Connecting to ***.***.***.168
    #   CONNECTED(00000005)
    #   depth=0 O=Yellowbrick Manager, CN=Yellowbrick Manager
    #   verify error:num=18:self-signed certificate
    #   verify return:1
    #   depth=0 O=Yellowbrick Manager, CN=Yellowbrick Manager
    #   ....
    #   ....
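
To see only the subject, issuer, and validity dates of the presented certificate, you can pipe the same output through openssl x509 (an optional convenience, not a required step):

bash
# Print only the subject, issuer, and validity window of the served certificate
openssl s_client -connect ${yellowbrickManagerIp}:443 </dev/null 2>/dev/null \
    | openssl x509 -noout -subject -issuer -dates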

Step 2: Ensure you're pointed to the correct Kubernetes cluster

bash
kubectl config current-context

Step 3: Get the namespace of your Yellowbrick Manager. This is the namespace you created during installation; in the example output below it is yb-ns.

bash
kubectl get ns
# Example output: 
    # NAME              STATUS   AGE
    # default           Active   22d
    # kube-node-lease   Active   22d
    # kube-public       Active   22d
    # kube-system       Active   22d
    # monitoring        Active   22d
    # yb-ns             Active   22d
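
The remaining steps reference the namespace through the $NAMESPACE environment variable; a minimal sketch of setting it, assuming the namespace from the example output above:

bash
# Export the namespace so later commands can reference it as $NAMESPACE
export NAMESPACE=yb-ns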

Step 4: Create the ClusterIssuer configured for your cloud provider's DNS. The following example uses a DNS-01 challenge against AWS Route 53. The AWS credentials are provided as an access key and secret, but any supported authentication method, such as IAM roles, may be used.

bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: secret-le-aws
  namespace: kube-system
type: Opaque
stringData:
  aws_secret_access_key: <redacted>
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-contour-cluster-issuer
spec:
  acme:
    email: "platform-ops@yellowbrick.com"
    privateKeySecretRef:
      name: acme-account-key
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
      - dns01:
          route53:
            region: us-east-1
            accessKeyID: <redacted>
            secretAccessKeySecretRef:
              name: secret-le-aws
              key: aws_secret_access_key
EOF
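
Before continuing, you can verify that the ClusterIssuer has registered with the ACME server and reports Ready:

bash
# The READY column should report True once the ACME account is registered
kubectl get clusterissuer letsencrypt-contour-cluster-issuer
kubectl describe clusterissuer letsencrypt-contour-cluster-issuer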

Next, take a backup of the existing certificate secret and Certificate resource:

bash
## Back up the existing TLS secret and Certificate resource
kubectl -n $NAMESPACE get secret yb-manager-tls -o yaml > yb-manager-tls-original.yaml
kubectl -n $NAMESPACE get certificate yb-manager-cert -o yaml > yb-manager-cert-original.yaml

Patch the certificate

bash
kubectl patch certificate yb-manager-cert -n $NAMESPACE --type='merge' -p '{"spec":{"dnsNames":["manager.yb.xip.net"], "commonName":"manager.yb.xip.net", "issuerRef":{"name":"letsencrypt-contour-cluster-issuer","kind":"ClusterIssuer"}}}'

or replace it

bash
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: yb-manager-cert
  namespace: $NAMESPACE
spec:
  secretName: yb-manager-tls 
  commonName: manager.yb.xip.net
  dnsNames:
    - manager.yb.xip.net
  issuerRef:
    name: letsencrypt-contour-cluster-issuer
    kind: ClusterIssuer
EOF

Since the cluster issuer is changing, you must delete the existing CertificateRequest so that a new one is created.

bash
# Get the name of the CertificateRequest
kubectl get certificaterequests -n $NAMESPACE

# Delete the CertificateRequest (use the name returned by the previous command)
kubectl -n $NAMESPACE delete certificaterequest yb-manager-cert-n4wfp-xxxx

Wait for the certificate to be ready.

bash
kubectl get certificate -n $NAMESPACE -w
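
Alternatively, kubectl wait can block until the Certificate reports Ready (the 5-minute timeout is an arbitrary example):

bash
# Block until cert-manager marks the certificate Ready, or time out after 5 minutes
kubectl -n $NAMESPACE wait --for=condition=Ready certificate/yb-manager-cert --timeout=5m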

Restart the yb-manager pod

bash
kubectl -n $NAMESPACE get pods
kubectl -n $NAMESPACE delete pod yb-manager-xxxxx-xxxxxx
    ## Example: kubectl -n $NAMESPACE delete pod yb-manager-6bd6bfb946-sqmtc
kubectl -n $NAMESPACE get pods
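
If the manager runs as a Deployment, a rollout restart achieves the same result without picking the pod by hand; the Deployment name yb-manager is an assumption based on the pod name above:

bash
# Restart all pods managed by the manager Deployment (assumed name: yb-manager)
kubectl -n $NAMESPACE rollout restart deployment yb-manager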

Step 5: Check the replaced certificate

bash
openssl s_client -showcerts -connect ${yellowbrickManagerIp}:443 </dev/null

Updating the Default Instance Certificate

This will need to be performed for each instance.

Step 1: Take a backup of the existing instance-tls certificate before replacing it:

bash
kubectl -n $NAMESPACE get certificate ybinst-${dw-instance-name}-instance-tls -o yaml > instance-tls-backup.yaml
    # Example: kubectl -n jpk-ns get certificate ybinst-jpk-instance-tls -o yaml > instance-tls-backup.yaml
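
If you are unsure of the exact name, list the Certificate resources in the namespace:

bash
# Instance certificates follow the ybinst-<instance-name>-instance-tls naming convention
kubectl -n $NAMESPACE get certificate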

Step 2: Update the certificate. Repeat this step for each instance.

bash
kubectl patch certificate ybinst-${dw-instance-name}-instance-tls -n $NAMESPACE --type='merge' -p '{"spec":{"dnsNames":["${dw-instance-name}.yb.xip.net"], "commonName":"${dw-instance-name}.yb.xip.net", "issuerRef":{"name":"letsencrypt-contour-cluster-issuer","kind":"ClusterIssuer"}}}'

or replace it

bash
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ybinst-${dw-instance-name}-instance-tls
  namespace: $NAMESPACE
spec:
  secretName: ybinst-${dw-instance-name}-instance-tls 
  commonName: ${dw-instance-name}.yb.xip.net
  dnsNames:
    - ${dw-instance-name}.yb.xip.net
  issuerRef:
    name: letsencrypt-contour-cluster-issuer
    kind: ClusterIssuer
  keystores:
    jks:
      create: true
      passwordSecretRef:
        key: jks
        name: ybinst-${dw-instance-name}-tls-secret
    pkcs12:
      create: true
      passwordSecretRef:
        key: pkcs
        name: ybinst-${dw-instance-name}-tls-secret
EOF

Since the cluster issuer is changing, you must delete the existing CertificateRequest so that a new one is created.

bash
# Get the name of the CertificateRequest
kubectl get certificaterequests -n $NAMESPACE

# Delete the CertificateRequest (use the name returned by the previous command)
kubectl -n $NAMESPACE delete certificaterequest ybinst-${dw-instance-name}-instance-tls-xxxx
    ## Example: kubectl -n $NAMESPACE delete certificaterequest ybinst-jeff-instance-tls-sgnj6

Wait for the certificate to be ready.

bash
kubectl get certificate -n $NAMESPACE -w
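
Because the Certificate requests JKS and PKCS#12 keystores, the reissued secret should contain keystore entries alongside tls.crt and tls.key. A quick way to list the secret's data keys, assuming jq is installed:

bash
# Expect keystore.jks and keystore.p12 in addition to tls.crt and tls.key
kubectl -n $NAMESPACE get secret ybinst-${dw-instance-name}-instance-tls -o json | jq -r '.data | keys[]'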

Step 3: Restart the instance via kubectl

bash
# Restart each dw instance 
kubectl -n $instanceNamespace patch ybinstance/${dw-instance-name} --type=merge -p '{"spec":{"requestedState":"Restart"}}'
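
You can watch the instance resource until it reports a running state again:

bash
# Watch the instance status while it restarts (Ctrl-C to stop watching)
kubectl -n $instanceNamespace get ybinstance/${dw-instance-name} -w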

You can verify that the new SSL certificate is being used:

bash
openssl s_client -starttls postgres -connect ${dw-instance-name}.yb.xip.net:5432 </dev/null
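
As a final end-to-end check, you can connect with psql using full certificate verification. This is only a sketch: the user and database names are placeholders, and sslrootcert must point to a CA bundle containing the Let's Encrypt chain (the path shown is the usual Debian/Ubuntu location).

bash
# Placeholder user/dbname; sslrootcert must include the Let's Encrypt CA chain
psql "host=${dw-instance-name}.yb.xip.net port=5432 user=ybadmin dbname=yellowbrick sslmode=verify-full sslrootcert=/etc/ssl/certs/ca-certificates.crt"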