Working with Helm

We started working with manifest files, which works very well for small apps like our demo app. But let’s face it - in reality, applications are way more complex, and deploying and managing them brings new challenges. This is where Helm comes into play. Helm is a package manager for Kubernetes that makes our lives easier. A Helm chart is a package that contains all the resources necessary to deploy an application to a Kubernetes cluster.

So let’s get started by installing Helm first. If you are using Homebrew, you can simply run

brew install helm

Other installation options can be found in the official Helm documentation at https://helm.sh/docs/intro/install/.
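
If you are not on Homebrew, the installer script described in the Helm documentation is a convenient alternative (a minimal sketch following the documented steps - do review the script before running it):

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

Either way, you can verify your installation afterwards by running helm version.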

Using a predefined Helm chart

First, we will combine our application with an nginx-ingress controller, which uses NGINX as a reverse proxy and load balancer.

To install the controller, we first need to add the Helm chart repository for ingress-nginx via

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

By running helm search repo ingress-nginx you can list all charts available from that repository - which is just one in our case.
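
The output will look roughly like the following - the chart and app versions will of course depend on when you run the command:

NAME                         CHART VERSION  APP VERSION  DESCRIPTION
ingress-nginx/ingress-nginx  4.10.0         1.10.0       Ingress controller for Kubernetes using NGINX a...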

Having our repository in place, we can now install our nginx-ingress controller by running

helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx --create-namespace

Please be patient - the installation can take some time :)
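
While you wait, you can watch the controller Pod come up via

kubectl get pods -n ingress-nginx --watch

Once the Pod reports STATUS Running and READY 1/1, press Ctrl+C and continue.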

To see the controller in action in our basic local setup, we also need to add a local DNS entry in our /etc/hosts. For this, run

sudo vi /etc/hosts

Enter your password when prompted.

In the vim editor, press i to enter insert mode and add the following entry:

127.0.0.1 hello.from.bnerd

When you are done, press Escape to leave insert mode and then type :wq to save and exit vim.
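
Alternatively, you can append the entry without opening an editor at all:

echo "127.0.0.1 hello.from.bnerd" | sudo tee -a /etc/hosts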

Now we are ready to get back to our manifest.yaml file and configure our Ingress. Please add the following lines to the end of the file:

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: hello.from.bnerd
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-world
                port:
                  number: 3000

Apply your changes via

kubectl apply -f manifest.yaml
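
You can confirm that the Ingress resource was created and picked up by the controller via

kubectl get ingress hello-world-ingress

It may take a moment until the ADDRESS column is populated.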

Let’s check whether we can access our app at http://hello.from.bnerd, as defined in the host rule of our Ingress. Awesome, there it is 🎉

Creating our own Helm chart

Up to now we have been using a predefined Helm chart, but it is also possible to package our own demo application with Helm. To avoid confusion, let’s take down our current Pods and configurations via

kubectl delete -f manifest.yaml

Now we can get started with creating a basic Helm chart by running

helm create hello-world

cd into hello-world and let’s explore what Helm generated:

  • Chart.yaml: Defines the very basics of our chart, such as its name and version.
  • values.yaml: Specifies the default values that are passed into the templates.
  • templates directory: Contains all configurations for the application that will be deployed into the cluster and uses the values we specified in the values.yaml file.
  • charts directory: Is empty by default and can be used to add additional charts that are needed to deploy an application.
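
At any point, you can check and render the chart locally without installing anything into the cluster - handy for verifying your changes as we go:

helm lint .
helm template .

helm lint flags common chart issues, while helm template prints the fully rendered manifests to stdout.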

So let us configure the values.yaml file for the deployment of our app. For the sake of simplicity, let’s use the version of our app without environment variables and replace the current values as follows:

# Default values for hello-world.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: vtrhh/hello-world-app
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: "latest"

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}

podSecurityContext:
  {}
  # fsGroup: 2000

securityContext:
  {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: LoadBalancer
  port: 3000

ingress:
  enabled: false
  className: ""
  annotations:
    {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources:
  {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}

⚠️ If you are a Linux user, please set the image tag in line 11 to amd64 instead, so that the resulting image reference is vtrhh/hello-world-app:amd64:

tag: "amd64"

To install the app via Helm, ensure that you are in the root of your hello-world directory and run:

helm upgrade --install hello-world-helm .
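
You can check that the release was created and that the Pod is running via

helm list
kubectl get pods

helm list should show the release hello-world-helm with the status deployed.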

Open http://localhost:3000 and there is our app again 🎉

Combining our Helm chart with the nginx-ingress controller

Remember the nginx-ingress controller from the beginning of this lab? It is still up and running, so why not use it in our Helm chart as well?

We just need to adjust a few lines in the ingress section of our values.yaml file, starting at line 46, so that it matches the snippet shown after this list:

  • set “enabled” to true
  • update “className” to nginx
  • update “host” to hello.from.bnerd (line 53)
  • set “pathType” to Prefix (line 56)
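
The ingress section should then look like this:

ingress:
  enabled: true
  className: "nginx"
  annotations:
    {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: hello.from.bnerd
      paths:
        - path: /
          pathType: Prefix
  tls: []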

Apply your changes via

helm upgrade --install hello-world-helm .

from the root of your hello-world directory and visit http://hello.from.bnerd. 🎉

Adding ConfigMap and Secret

Let’s now also add the configurations from Lab 7 to our helm chart so we are able to pass environment variables and secrets via Helm.

First, we need to point the values.yaml file at the correct image of our application by changing the image section starting at line 7 as follows:

image:
  repository: vtrhh/hello-world-app
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: "v3"

⚠️ As a Linux user, please remember to use the tag v3-amd64.

Run

helm upgrade --install hello-world-helm .

again and check the logs of the updated Pod. You should once more see the output containing our environment variables, just like in Lab 7.
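
If you prefer the terminal over the Docker Desktop UI, you can tail the logs directly - assuming the default naming from the fullname template, the Deployment is called hello-world-helm:

kubectl logs deployment/hello-world-helm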

Great! We can now add our ConfigMap and Secret within the templates folder. cd into the folder and create the files:

cd templates
touch configmap.yaml secret.yaml

Then add the following configurations to your files:

configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  ENVIRONMENT: test

secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-secret
type: Opaque
data:
  SUPER_SECRET_STRING: c3VwZXJfZHVwZXJfc2VjcmV0X3N0cmluZw==
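
In case you are wondering where the value of SUPER_SECRET_STRING comes from: it is simply our secret string in base64 encoding, which you can reproduce on the command line:

echo -n 'super_duper_secret_string' | base64

The -n flag matters here - without it, a trailing newline would be encoded as part of the secret.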

Great, the configurations are in place, but we also need to make the deployment use them. Hence, navigate to your deployment.yaml file within the templates folder and replace its content with the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "hello-world.fullname" . }}
  labels:
    {{- include "hello-world.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "hello-world.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "hello-world.labels" . | nindent 8 }}
        {{- with .Values.podLabels }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "hello-world.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: {{ .Values.service.port }}
              protocol: TCP
          env:
            - name: ENVIRONMENT
              valueFrom:
                configMapKeyRef:
                  name: {{ .Release.Name }}-configmap
                  key: ENVIRONMENT
            - name: SUPER_SECRET_STRING
              valueFrom:
                secretKeyRef:
                  name: {{ .Release.Name }}-secret
                  key: SUPER_SECRET_STRING
          {{- with .Values.livenessProbe }}
          livenessProbe:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          {{- with .Values.readinessProbe }}
          readinessProbe:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          {{- with .Values.volumeMounts }}
          volumeMounts:
            {{- toYaml . | nindent 12 }}
          {{- end }}
      {{- with .Values.volumes }}
      volumes:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}

As you can see in line 43, we added the references to our ConfigMap and Secret as environment variables for our container. We are all set to run

helm upgrade --install hello-world-helm .

And voilà, there are our values again in the logs :)

One of the advantages of Helm is that we can package our configuration in such a way that we only need to adapt the values in values.yaml instead of touching individual values across different files in the templates folder. So let’s do some minor refactoring to make this work for our Helm chart, too.

Navigate to the values.yaml file and add the following lines at the end:

appEnv:
  environment: test
  superSecretString: super_duper_secret_string

Then adapt the references in our configmap.yaml and secret.yaml as follows:

configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  ENVIRONMENT: {{ .Values.appEnv.environment | quote }}

secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-secret
type: Opaque
data:
  SUPER_SECRET_STRING: {{ .Values.appEnv.superSecretString | b64enc }}

Run

helm upgrade --install hello-world-helm .

and yay - values are still there, but now coming from the values.yaml file 🎉
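
If you want to double-check that the Secret in the cluster really contains the expected value, you can decode it right from the command line - remember that the Secret is named after the release, i.e. hello-world-helm-secret:

kubectl get secret hello-world-helm-secret -o jsonpath='{.data.SUPER_SECRET_STRING}' | base64 --decode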

Handling secrets in the “real world”

For our workshop setup, we are all good now - but in daily business we would push our code to a remote repository, right?

☝️ In the current setup, the secret is unencrypted and should not be pushed to any repository as it is!

Setting up encryption would go beyond the scope of this workshop, but for reference, the following options could be used to handle the secret:

  • Encrypt the file containing the secret value, e.g. with SOPS
  • Use tools for external secrets management such as HashiCorp Vault, AWS Secrets Manager, etc.
  • Define secrets as environment variables in your CI/CD system (e.g., GitHub Actions, GitLab CI) and inject them at deploy time, as sketched below
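
For the last option, Helm’s --set flag comes in handy: values passed on the command line override those in values.yaml, so the real secret never has to live in the repository. A minimal sketch, assuming your CI/CD system exposes the secret as an environment variable named SUPER_SECRET (a name chosen here purely for illustration):

helm upgrade --install hello-world-helm . --set appEnv.superSecretString="$SUPER_SECRET"

The value in values.yaml can then be replaced by a harmless placeholder.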