Helm Chart Development: From Basics to Production

A comprehensive guide to Helm chart development covering template syntax, values design, dependencies, hooks, testing, CI/CD integration, and a production-ready chart for Node.js applications.

Helm is the package manager for Kubernetes, and once you move past deploying a handful of raw YAML manifests, it becomes essential. It solves the real problem of managing templated Kubernetes configurations across environments, handling release lifecycle (install, upgrade, rollback), and packaging complex applications with dependencies. If you are deploying anything beyond a single service to Kubernetes, you should be using Helm.

Prerequisites

Before working through this guide, you should have:

  • A working Kubernetes cluster (Docker Desktop, minikube, or a cloud-managed cluster)
  • kubectl installed and configured to talk to your cluster
  • Helm 3.x installed (brew install helm on macOS, choco install kubernetes-helm on Windows, or download from the Helm releases page)
  • Familiarity with Kubernetes resources (Deployments, Services, ConfigMaps, Secrets, Ingress)
  • Basic understanding of YAML and Go template syntax

Verify your setup:

helm version
# version.BuildInfo{Version:"v3.14.2", GitCommit:"...", GoVersion:"go1.21.7"}

kubectl cluster-info
# Kubernetes control plane is running at https://127.0.0.1:6443

What Helm Actually Solves

If you have deployed a Kubernetes application by hand, you know the pain. You have a Deployment YAML, a Service YAML, a ConfigMap, an Ingress, maybe a Secret, an HPA, a PDB. Each one has hardcoded values — the image tag, the replica count, resource limits, environment-specific URLs. Now multiply that by three environments (dev, staging, production). You end up with dozens of nearly identical YAML files, and changing one value means editing it in multiple places.

Helm solves three specific problems:

Templated manifests. Instead of maintaining separate YAML files per environment, you write templates with variables. A single chart produces correct manifests for any environment by swapping values files.

Release management. Helm tracks every deployment as a "release" with a revision history. You can see what is running, what changed, and when it was deployed. This is metadata that raw kubectl apply does not give you.

Rollback. When a deployment breaks production, helm rollback myapp 3 takes you back to revision 3 in seconds. No scrambling to find the old YAML, no manual kubectl apply of a previous version. Helm stores the full manifest for every revision.
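
To make that concrete, here is roughly what the release history looks like for an illustrative release named myapp (the column layout may vary slightly between Helm versions):

helm history myapp
# REVISION  UPDATED                   STATUS      CHART        APP VERSION  DESCRIPTION
# 1         Mon Feb  2 09:14:03 2026  superseded  myapp-0.1.0  1.0.0        Install complete
# 2         Wed Feb  4 16:41:20 2026  superseded  myapp-0.1.1  1.1.0        Upgrade complete
# 3         Fri Feb  6 11:05:47 2026  superseded  myapp-0.1.1  1.1.1        Upgrade complete
# 4         Sat Feb  7 18:22:10 2026  deployed    myapp-0.2.0  1.2.0        Upgrade complete

# Revision 4 is misbehaving? Go back to revision 3.
helm rollback myapp 3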

Helm Architecture

Helm 3 has a simple architecture. There is no server-side component (Helm 2 had Tiller, which was a security nightmare — it is gone). Everything runs client-side.

Charts are packages. A chart is a directory of files that describe a set of Kubernetes resources. Think of it like an npm package, but for Kubernetes.

Releases are instances of a chart running in a cluster. When you helm install myapp ./my-chart, Helm creates a release called myapp. Each helm upgrade creates a new revision of that release.

Repositories are where charts are stored and shared. They work like npm registries — you add a repo, search for charts, and install them. Bitnami hosts a popular public repository, and Artifact Hub (artifacthub.io) indexes charts from many repositories so you can search across them.

# Add the Bitnami repository
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Search for charts
helm search repo postgresql
# NAME                    CHART VERSION   APP VERSION   DESCRIPTION
# bitnami/postgresql      14.3.1          16.2.0        PostgreSQL is an object-relational database...

# Install a chart
helm install my-postgres bitnami/postgresql --set auth.postgresPassword=secret123

# List releases
helm list
# NAME            NAMESPACE   REVISION   UPDATED                    STATUS     CHART                APP VERSION
# my-postgres     default     1          2026-02-08 10:30:22        deployed   postgresql-14.3.1    16.2.0

Creating Your First Chart

The helm create command scaffolds a new chart with sensible defaults:

helm create myapp

This generates a complete chart directory. Let us look at what it creates.

Chart Directory Structure

myapp/
├── Chart.yaml          # Chart metadata (name, version, description)
├── values.yaml         # Default configuration values
├── charts/             # Directory for chart dependencies
├── templates/          # Kubernetes manifest templates
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── ingress.yaml
│   ├── hpa.yaml
│   ├── serviceaccount.yaml
│   ├── _helpers.tpl    # Named template definitions (partials)
│   ├── NOTES.txt       # Post-install instructions shown to user
│   └── tests/
│       └── test-connection.yaml
└── .helmignore         # Files to exclude from packaging

Chart.yaml is the manifest. It defines the chart name, version, app version, and dependencies:

apiVersion: v2
name: myapp
description: A Helm chart for my Node.js application
type: application
version: 0.1.0       # Chart version - bump this on every change
appVersion: "1.0.0"  # Version of your application

The distinction between version and appVersion matters. version is the chart packaging version — you bump it whenever you change any template, value, or dependency. appVersion is the version of the application being deployed. They evolve independently.

values.yaml is the default configuration. Every variable your templates reference should have a default here:

replicaCount: 1

image:
  repository: myregistry/myapp
  pullPolicy: IfNotPresent
  tag: ""  # Overridden by appVersion if empty

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  className: ""
  hosts:
    - host: myapp.local
      paths:
        - path: /
          pathType: ImplementationSpecific

resources:
  limits:
    cpu: 500m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi

Template Syntax Basics

Helm templates use Go's text/template package with Sprig functions bolted on. The syntax looks alien at first, but there are only a few patterns you need to master.

Accessing Values

Double curly braces {{ }} delimit template actions. The most common action is accessing values:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
  labels:
    app: {{ .Chart.Name }}
    version: {{ .Chart.AppVersion }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          ports:
            - containerPort: {{ .Values.service.port }}

The dot (.) is the root context. The built-in objects are:

  • .Values — values from values.yaml and overrides
  • .Release — release metadata (.Release.Name, .Release.Namespace, .Release.Revision)
  • .Chart — contents of Chart.yaml
  • .Template — current template info (.Template.Name, .Template.BasePath)
  • .Capabilities — cluster capability info (Kubernetes version, API versions)

Using include for Named Templates

The {{ include }} function renders a named template and returns the result as a string. This is how you reuse template fragments:

metadata:
  name: {{ include "myapp.fullname" . }}
  labels:
    {{- include "myapp.labels" . | nindent 4 }}

The include function is preferred over the template action because include returns a string that you can pipe through other functions (like nindent), while template writes directly to output.

Values.yaml Design

Good values design makes or breaks a chart. Here are the principles I follow after maintaining charts in production for years.

Sensible defaults. Every value should have a default that works for local development. A developer should be able to helm install myapp ./myapp with zero overrides and get a running instance.

Flat over nested, mostly. Do not nest values six levels deep. Two or three levels is the sweet spot. image.repository is fine. deployment.containers.primary.image.registry.hostname is absurd.

Use enabled flags for optional components. Ingress, HPA, PDB, service monitors — these should all have enabled: false by default with a clean block of configuration underneath:

autoscaling:
  enabled: false
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 75

ingress:
  enabled: false
  className: nginx
  annotations: {}
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []

Override patterns. Users override values in three ways, and your chart should support all of them:

# 1. Command-line --set flags (for single values, CI/CD)
helm install myapp ./myapp --set image.tag=v2.1.0 --set replicaCount=3

# 2. Values files (for environment-specific configs)
helm install myapp ./myapp -f values-production.yaml

# 3. Multiple values files (layered, right-to-left precedence)
helm install myapp ./myapp -f values.yaml -f values-production.yaml -f values-secrets.yaml

The precedence order is: defaults in values.yaml < first -f file < second -f file < --set flags. Later sources override earlier ones.
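
A quick way to convince yourself of the ordering, assuming an illustrative chart where values.yaml sets replicaCount: 1 and values-production.yaml sets replicaCount: 3:

helm template myapp ./myapp -f values-production.yaml --set replicaCount=5 \
  | grep "replicas:"
#   replicas: 5    (--set wins over both values files)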

Template Functions and Pipelines

Helm includes the Sprig template function library, which gives you well over a hundred utility functions. Here are the ones I use constantly.

default

Provides a fallback value:

image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"

quote

Wraps a value in double quotes. Essential for values that YAML might misinterpret:

env:
  - name: LOG_LEVEL
    value: {{ .Values.logLevel | quote }}
  - name: ENABLE_DEBUG
    value: {{ .Values.debug | quote }}  # Without quote, "true" becomes a boolean

toYaml and nindent

These two are your best friends for injecting blocks of YAML:

resources:
  {{- toYaml .Values.resources | nindent 12 }}

If values.yaml has:

resources:
  limits:
    cpu: 500m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi

The rendered output will be correctly indented at 12 spaces. The - in {{- trims whitespace before the action, preventing blank lines.
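
In context, the rendered fragment of a Deployment would look roughly like this, with the keys under resources landing at 12 spaces of indentation:

      containers:
        - name: myapp
          resources:
            limits:
              cpu: 500m
              memory: 256Mi
            requests:
              cpu: 100m
              memory: 128Mi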

required

Fails rendering if a value is not provided:

env:
  - name: DATABASE_URL
    value: {{ required "A database URL is required (.Values.databaseUrl)" .Values.databaseUrl }}

tpl

Renders a string as a template. Useful when values themselves contain template expressions:

annotations:
  {{- range $key, $value := .Values.podAnnotations }}
  {{ $key }}: {{ tpl $value $ | quote }}
  {{- end }}

Conditionals and Loops

if/else

Conditional blocks control whether a resource or block is rendered:

{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ include "myapp.fullname" . }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.className }}
  ingressClassName: {{ .Values.ingress.className }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            pathType: {{ .pathType }}
            backend:
              service:
                name: {{ include "myapp.fullname" $ }}
                port:
                  number: {{ $.Values.service.port }}
          {{- end }}
    {{- end }}
  {{- if .Values.ingress.tls }}
  tls:
    {{- toYaml .Values.ingress.tls | nindent 4 }}
  {{- end }}
{{- end }}

Note the $ — inside a range loop, the dot context changes to the current iteration item. Use $ to access the root context.

range

Iterate over lists and maps:

env:
  {{- range $key, $value := .Values.env }}
  - name: {{ $key }}
    value: {{ $value | quote }}
  {{- end }}

With values.yaml:

env:
  NODE_ENV: production
  LOG_LEVEL: info
  PORT: "3000"

with

The with action sets the dot context to a value, and also acts as a conditional (the block is skipped if the value is empty):

{{- with .Values.nodeSelector }}
nodeSelector:
  {{- toYaml . | nindent 8 }}
{{- end }}

Named Templates and _helpers.tpl

The _helpers.tpl file is where you define reusable template fragments. Files starting with _ are not rendered as manifests — they are loaded for their definitions only.

{{/* templates/_helpers.tpl */}}

{{/*
Expand the name of the chart.
*/}}
{{- define "myapp.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create a default fully qualified app name.
*/}}
{{- define "myapp.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

{{/*
Common labels
*/}}
{{- define "myapp.labels" -}}
helm.sh/chart: {{ include "myapp.chart" . }}
{{ include "myapp.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "myapp.selectorLabels" -}}
app.kubernetes.io/name: {{ include "myapp.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{/*
Chart label
*/}}
{{- define "myapp.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}

The 63-character truncation is not arbitrary — Kubernetes names must be valid DNS labels, which are limited to 63 characters. This is one of those details you learn the hard way when a release name and chart name combine to exceed the limit and everything fails.

Managing Dependencies

Charts can depend on other charts. This is how you include PostgreSQL, Redis, or any other infrastructure alongside your application.

Declare dependencies in Chart.yaml:

apiVersion: v2
name: myapp
version: 0.1.0
appVersion: "1.0.0"

dependencies:
  - name: postgresql
    version: "14.x.x"
    repository: https://charts.bitnami.com/bitnami
    condition: postgresql.enabled
  - name: redis
    version: "18.x.x"
    repository: https://charts.bitnami.com/bitnami
    condition: redis.enabled

Then download them:

helm dependency update ./myapp
# Hang tight while we grab the latest from your chart repositories...
# ...Successfully got an update from the "bitnami" chart repository
# Saving 2 charts
# Downloading postgresql from repo https://charts.bitnami.com/bitnami
# Downloading redis from repo https://charts.bitnami.com/bitnami
# Deleting outdated charts

This downloads the dependency charts into charts/ as .tgz files. The Chart.lock file pins exact versions.
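
The lock file itself is plain YAML. The resolved versions and digest below are illustrative:

# Chart.lock
dependencies:
- name: postgresql
  repository: https://charts.bitnami.com/bitnami
  version: 14.3.1
- name: redis
  repository: https://charts.bitnami.com/bitnami
  version: 18.19.2
digest: sha256:3f1c...
generated: "2026-02-08T10:31:02Z"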

Configure dependencies through your parent chart's values.yaml:

postgresql:
  enabled: true
  auth:
    postgresPassword: changeme
    database: myapp
  primary:
    persistence:
      size: 10Gi

redis:
  enabled: false

The condition field in Chart.yaml ties to a values path — when postgresql.enabled is false, the entire PostgreSQL subchart is skipped.
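
Because the condition is just a values path, you can toggle subcharts per environment or per install. The flags below are illustrative:

# Skip the Redis subchart entirely for this install
helm upgrade --install myapp ./myapp --set redis.enabled=false

# Disable the bundled PostgreSQL when pointing at an external database
helm upgrade --install myapp ./myapp -f values-production.yaml --set postgresql.enabled=false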

Chart Hooks

Hooks let you run actions at specific points in a release lifecycle. They are regular Kubernetes resources (usually Jobs) with a special annotation.

Common hook points:

  • pre-install — before any release resources are created
  • post-install — after all resources are created
  • pre-upgrade — before resources are upgraded
  • post-upgrade — after resources are upgraded
  • pre-delete — before resources are deleted
  • post-delete — after resources are deleted

Here is a database migration hook that runs before every upgrade:

# templates/migration-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "myapp.fullname" . }}-migrate
  labels:
    {{- include "myapp.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": pre-upgrade,pre-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  backoffLimit: 3
  template:
    metadata:
      name: {{ include "myapp.fullname" . }}-migrate
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          command: ["node", "migrate.js"]
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: {{ include "myapp.fullname" . }}-secret
                  key: databaseUrl

The hook-weight controls execution order (lower runs first). The hook-delete-policy of before-hook-creation deletes the previous Job before creating a new one — essential because Kubernetes does not allow two Jobs with the same name.

Testing Charts

Helm provides three levels of testing. Use all of them.

helm lint

Static analysis that catches syntax errors, missing required fields, and common mistakes:

helm lint ./myapp
# ==> Linting ./myapp
# [INFO] Chart.yaml: icon is recommended
# [WARNING] templates/ingress.yaml: object name does not conform to Kubernetes naming requirements
#
# 1 chart(s) linted, 0 chart(s) failed

helm template

Renders templates locally without installing. This is the most useful debugging tool:

# Render all templates with default values
helm template myapp ./myapp

# Render with a specific values file
helm template myapp ./myapp -f values-production.yaml

# Render a specific template (-s is short for --show-only)
helm template myapp ./myapp -s templates/deployment.yaml

# Add --debug for verbose output when troubleshooting rendering problems
helm template myapp ./myapp --show-only templates/deployment.yaml --debug

I use helm template constantly during development. Pipe it through kubectl apply --dry-run=client -f - to also validate that the rendered YAML is valid Kubernetes:

helm template myapp ./myapp -f values-production.yaml | kubectl apply --dry-run=client -f -
# deployment.apps/myapp created (dry run)
# service/myapp created (dry run)
# ingress.networking.k8s.io/myapp created (dry run)

helm test

Integration tests that run inside the cluster. Define test pods in templates/tests/:

# templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "myapp.fullname" . }}-test-connection"
  labels:
    {{- include "myapp.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": test
spec:
  restartPolicy: Never
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['{{ include "myapp.fullname" . }}:{{ .Values.service.port }}/healthz']

Run tests after install or upgrade:

helm test myapp
# NAME: myapp
# LAST DEPLOYED: Sun Feb  8 14:30:00 2026
# NAMESPACE: default
# STATUS: deployed
# TEST SUITE:     myapp-test-connection
# Last Started:   Sun Feb  8 14:30:05 2026
# Last Completed: Sun Feb  8 14:30:08 2026
# Phase:          Succeeded

Packaging and Publishing

Package your chart into a .tgz file for distribution:

helm package ./myapp
# Successfully packaged chart and saved it to: /path/to/myapp-0.1.0.tgz

Publishing to a Chart Repository

A Helm chart repository is simply an HTTP server hosting an index.yaml file and chart .tgz files. You can use GitHub Pages, an S3 bucket, or a dedicated registry like Harbor or ChartMuseum.

GitHub Pages approach:

# Create a gh-pages branch for hosting
git checkout --orphan gh-pages
git rm -rf .

# Add your packaged chart
cp /path/to/myapp-0.1.0.tgz .

# Generate the index
helm repo index . --url https://myorg.github.io/helm-charts

# Commit and push
git add .
git commit -m "Add myapp chart v0.1.0"
git push origin gh-pages

OCI registry approach (Helm 3.8+):

Modern Helm supports pushing charts to OCI-compliant registries like Docker Hub, GitHub Container Registry, and AWS ECR:

# Login to registry
helm registry login ghcr.io -u myuser

# Push the chart
helm push myapp-0.1.0.tgz oci://ghcr.io/myorg/helm-charts

# Install from OCI
helm install myapp oci://ghcr.io/myorg/helm-charts/myapp --version 0.1.0

I recommend the OCI approach for new projects. It reuses your existing container registry infrastructure and avoids maintaining a separate index.yaml.
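
To inspect a chart that has already been published, you can pull it back out of the registry (the path here is illustrative):

# Download and unpack the chart locally
helm pull oci://ghcr.io/myorg/helm-charts/myapp --version 0.1.0 --untar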

Environment-Specific Values Files

This is where Helm really pays off. Maintain one chart, deploy to any environment with different values files.

myapp/
├── values.yaml            # Defaults (development)
├── values-dev.yaml        # Dev overrides
├── values-staging.yaml    # Staging overrides
├── values-production.yaml # Production overrides

values.yaml (defaults, suitable for local dev):

replicaCount: 1

image:
  repository: myregistry/myapp
  tag: latest

service:
  type: ClusterIP
  port: 3000

ingress:
  enabled: false

resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi

autoscaling:
  enabled: false

env:
  NODE_ENV: development
  LOG_LEVEL: debug

values-production.yaml (production overrides):

replicaCount: 3

image:
  tag: ""  # Set by CI/CD via --set image.tag=$TAG

ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/rate-limit: "100"
  hosts:
    - host: api.myapp.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: myapp-tls
      hosts:
        - api.myapp.com

resources:
  requests:
    cpu: 250m
    memory: 512Mi
  limits:
    cpu: "1"
    memory: 1Gi

autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 15
  targetCPUUtilizationPercentage: 70

env:
  NODE_ENV: production
  LOG_LEVEL: warn

postgresql:
  primary:
    persistence:
      size: 100Gi
    resources:
      requests:
        cpu: 500m
        memory: 1Gi

Deploy to different environments:

# Development
helm upgrade --install myapp ./myapp -f values-dev.yaml -n dev

# Production
helm upgrade --install myapp ./myapp \
  -f values-production.yaml \
  --set image.tag=v2.3.1 \
  -n production

Helm in CI/CD Pipelines

GitHub Actions

# .github/workflows/deploy.yml
name: Deploy to Kubernetes
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          push: true
          tags: ghcr.io/myorg/myapp:${{ github.sha }}

      - name: Set up Helm
        uses: azure/setup-helm@v3

      - name: Configure kubeconfig
        uses: azure/k8s-set-context@v3
        with:
          kubeconfig: ${{ secrets.KUBECONFIG }}

      - name: Deploy with Helm
        run: |
          helm dependency update ./chart
          helm upgrade --install myapp ./chart \
            -f ./chart/values-production.yaml \
            --set image.tag=${{ github.sha }} \
            --namespace production \
            --create-namespace \
            --wait \
            --timeout 5m

      - name: Run Helm tests
        run: helm test myapp -n production

Azure Pipelines

# azure-pipelines.yml
trigger:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

variables:
  imageTag: '$(Build.BuildId)'
  chartPath: './chart'
  releaseName: 'myapp'
  namespace: 'production'

steps:
  - task: Docker@2
    displayName: 'Build and push image'
    inputs:
      containerRegistry: 'myACR'
      repository: 'myapp'
      command: 'buildAndPush'
      Dockerfile: 'Dockerfile'
      tags: |
        $(imageTag)
        latest

  - task: HelmInstaller@0
    displayName: 'Install Helm'
    inputs:
      helmVersion: '3.14.2'

  - task: HelmDeploy@0
    displayName: 'Deploy with Helm'
    inputs:
      connectionType: 'Kubernetes Service Connection'
      kubernetesServiceConnection: 'my-aks-connection'
      namespace: $(namespace)
      command: 'upgrade'
      chartType: 'FilePath'
      chartPath: $(chartPath)
      releaseName: $(releaseName)
      overrideValues: 'image.tag=$(imageTag)'
      valueFile: '$(chartPath)/values-production.yaml'
      arguments: '--wait --timeout 5m0s --create-namespace'

  - script: |
      helm test $(releaseName) -n $(namespace)
    displayName: 'Run Helm tests'

The --wait flag is critical in CI/CD. It makes Helm wait until all resources are ready before reporting success. Without it, your pipeline reports success while pods are still starting (or crashing).

Chart Versioning and Semantic Versioning

Follow semantic versioning for your charts:

  • MAJOR — breaking changes to values schema (renamed keys, removed features)
  • MINOR — new features, new optional values, new templates
  • PATCH — bug fixes, documentation updates, non-breaking template changes

Keep version (chart version) and appVersion (application version) separate and independent. A chart bug fix bumps the chart version but not the app version. A new application release bumps the app version and might not change the chart at all.

# Chart.yaml
version: 2.3.1     # Chart packaging version
appVersion: "4.1.0" # Application version

Automate versioning in CI:

# Bump chart version before packaging
CURRENT=$(grep '^version:' Chart.yaml | awk '{print $2}')
# Use a tool like semver-cli or just increment with sed
NEW=$(echo $CURRENT | awk -F. '{printf "%d.%d.%d", $1, $2, $3+1}')
sed -i "s/^version: .*/version: $NEW/" Chart.yaml
helm package .

Complete Working Example: Node.js Express App with PostgreSQL

Here is a production-ready Helm chart for a Node.js Express application with a PostgreSQL dependency. This is the chart structure I use for real projects.

Chart.yaml

apiVersion: v2
name: express-api
description: A production Helm chart for a Node.js Express API
type: application
version: 1.0.0
appVersion: "1.0.0"
maintainers:
  - name: Shane
    email: [email protected]

dependencies:
  - name: postgresql
    version: "14.x.x"
    repository: https://charts.bitnami.com/bitnami
    condition: postgresql.enabled

values.yaml

replicaCount: 1

image:
  repository: myregistry/express-api
  pullPolicy: IfNotPresent
  tag: ""

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  create: true
  automount: true
  annotations: {}
  name: ""

service:
  type: ClusterIP
  port: 3000

ingress:
  enabled: false
  className: ""
  annotations: {}
  hosts:
    - host: api.local
      paths:
        - path: /
          pathType: Prefix
  tls: []

resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi

autoscaling:
  enabled: false
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 75
  targetMemoryUtilizationPercentage: 80

livenessProbe:
  httpGet:
    path: /healthz
    port: http
  initialDelaySeconds: 15
  periodSeconds: 20
  failureThreshold: 3

readinessProbe:
  httpGet:
    path: /healthz
    port: http
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 3

env:
  NODE_ENV: development
  LOG_LEVEL: debug

secrets:
  databaseUrl: ""
  sessionSecret: ""

nodeSelector: {}
tolerations: []
affinity: {}

postgresql:
  enabled: true
  auth:
    postgresPassword: localdev
    database: express_api
  primary:
    persistence:
      size: 5Gi

templates/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "express-api.fullname" . }}
  labels:
    {{- include "express-api.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "express-api.selectorLabels" . | nindent 6 }}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
        checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
      labels:
        {{- include "express-api.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "express-api.serviceAccountName" . }}
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 1000
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: {{ .Values.service.port }}
              protocol: TCP
          envFrom:
            - configMapRef:
                name: {{ include "express-api.fullname" . }}-config
            - secretRef:
                name: {{ include "express-api.fullname" . }}-secret
          {{- with .Values.livenessProbe }}
          livenessProbe:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          {{- with .Values.readinessProbe }}
          readinessProbe:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}

The checksum/config and checksum/secret annotations are a critical pattern. When a ConfigMap or Secret changes, these annotations change, which triggers a rolling restart of the pods. Without this, you can update a ConfigMap but your pods keep running with the old values until they happen to restart.
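
One way to watch the pattern work, assuming the dev release from the deploy commands further down:

# Change a value that only lands in the ConfigMap...
helm upgrade express-api ./express-api --set env.LOG_LEVEL=info -n dev

# ...the checksum annotation changes, so the Deployment rolls its pods
kubectl rollout status deployment/express-api -n dev
kubectl get pods -n dev -l app.kubernetes.io/instance=express-api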

templates/service.yaml

apiVersion: v1
kind: Service
metadata:
  name: {{ include "express-api.fullname" . }}
  labels:
    {{- include "express-api.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "express-api.selectorLabels" . | nindent 4 }}

templates/configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "express-api.fullname" . }}-config
  labels:
    {{- include "express-api.labels" . | nindent 4 }}
data:
  {{- range $key, $value := .Values.env }}
  {{ $key }}: {{ $value | quote }}
  {{- end }}
  {{- if .Values.postgresql.enabled }}
  DB_HOST: {{ .Release.Name }}-postgresql
  DB_PORT: "5432"
  DB_NAME: {{ .Values.postgresql.auth.database | quote }}
  {{- end }}

templates/secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: {{ include "express-api.fullname" . }}-secret
  labels:
    {{- include "express-api.labels" . | nindent 4 }}
type: Opaque
data:
  {{- if .Values.secrets.databaseUrl }}
  DATABASE_URL: {{ .Values.secrets.databaseUrl | b64enc }}
  {{- else if .Values.postgresql.enabled }}
  DATABASE_URL: {{ printf "postgresql://postgres:%s@%s-postgresql:5432/%s" .Values.postgresql.auth.postgresPassword .Release.Name .Values.postgresql.auth.database | b64enc }}
  {{- end }}
  {{- if .Values.secrets.sessionSecret }}
  SESSION_SECRET: {{ .Values.secrets.sessionSecret | b64enc }}
  {{- else }}
  SESSION_SECRET: {{ randAlphaNum 32 | b64enc }}
  {{- end }}
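
One caveat with the randAlphaNum fallback: templates are re-rendered on every helm upgrade, so the generated SESSION_SECRET rotates with each release unless secrets.sessionSecret is set explicitly. If you want a generated value to stick, a common workaround is to reuse the live Secret via the lookup function. This is a sketch, not part of the chart above, and lookup returns nothing during helm template or --dry-run, so the random fallback still applies there:

  {{- $existing := lookup "v1" "Secret" .Release.Namespace (printf "%s-secret" (include "express-api.fullname" .)) }}
  {{- if .Values.secrets.sessionSecret }}
  SESSION_SECRET: {{ .Values.secrets.sessionSecret | b64enc }}
  {{- else if $existing }}
  {{- /* Reuse the previously generated value; it is already base64-encoded in the live Secret */}}
  SESSION_SECRET: {{ index $existing.data "SESSION_SECRET" }}
  {{- else }}
  SESSION_SECRET: {{ randAlphaNum 32 | b64enc }}
  {{- end }}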

templates/hpa.yaml

{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "express-api.fullname" . }}
  labels:
    {{- include "express-api.labels" . | nindent 4 }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "express-api.fullname" . }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    {{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
    {{- end }}
    {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
    {{- end }}
{{- end }}

templates/ingress.yaml

{{- if .Values.ingress.enabled -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ include "express-api.fullname" . }}
  labels:
    {{- include "express-api.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.className }}
  ingressClassName: {{ .Values.ingress.className }}
  {{- end }}
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            pathType: {{ .pathType }}
            backend:
              service:
                name: {{ include "express-api.fullname" $ }}
                port:
                  number: {{ $.Values.service.port }}
          {{- end }}
    {{- end }}
{{- end }}

The Node.js Application Health Check

Your Express app needs to expose the /healthz endpoint that the probes hit:

const express = require("express");
const db = require("./db"); // database client module used by the app

const app = express();
const port = process.env.PORT || 3000;

// Health check endpoint for Kubernetes probes
app.get("/healthz", async (req, res) => {
  try {
    // Check database connectivity
    await db.query("SELECT 1");
    res.status(200).json({ status: "ok", timestamp: new Date().toISOString() });
  } catch (err) {
    console.error("Health check failed:", err.message);
    res.status(503).json({ status: "unhealthy", error: err.message });
  }
});

app.listen(port, () => {
  console.log(`Server listening on port ${port}`);
});

Deploying the Complete Example

# Development
helm dependency update ./express-api
helm upgrade --install express-api ./express-api -n dev --create-namespace

# Production
helm upgrade --install express-api ./express-api \
  -f ./express-api/values-production.yaml \
  --set image.tag=v1.2.3 \
  --set secrets.databaseUrl="postgresql://user:pass@prod-db:5432/myapp" \
  -n production \
  --create-namespace \
  --wait \
  --timeout 5m

# Check the release
helm status express-api -n production
# NAME: express-api
# LAST DEPLOYED: Sun Feb  8 15:00:00 2026
# NAMESPACE: production
# STATUS: deployed
# REVISION: 1

# Rollback if something breaks
helm rollback express-api 1 -n production

Common Issues and Troubleshooting

1. YAML Indentation Errors

Error: INSTALLATION FAILED: YAML parse error on myapp/templates/deployment.yaml:
error converting YAML to JSON: yaml: line 42: did not find expected key

This almost always means your nindent value is wrong. The number passed to nindent must match the exact indentation level where the block will be inserted. Use helm template to render locally and inspect the output. Count the spaces carefully.

2. Release Name Conflicts

Error: INSTALLATION FAILED: cannot re-use a name that is still in use

You tried to helm install with a name that already exists. Use helm upgrade --install instead — it installs if the release does not exist and upgrades if it does. This is the pattern you should always use in CI/CD.

If a failed install left a broken release behind:

helm list --all -n default
# Look for releases with STATUS: failed
helm uninstall myapp -n default

3. Template Rendering Nil Pointer

Error: template: myapp/templates/deployment.yaml:25:28: executing "myapp/templates/deployment.yaml"
at <.Values.resources.limits.cpu>: nil pointer evaluating interface {}.cpu

You referenced a deeply nested value that does not exist. The fix is to use conditionals or the default function:

# Bad - crashes if resources or limits is nil
cpu: {{ .Values.resources.limits.cpu }}

# Good - safe access with default
cpu: {{ .Values.resources.limits.cpu | default "500m" }}

# Better - use with block
{{- with .Values.resources }}
resources:
  {{- toYaml . | nindent 12 }}
{{- end }}

4. Immutable Field Errors on Upgrade

Error: UPGRADE FAILED: cannot patch "myapp" with kind Deployment:
Deployment.apps "myapp" is invalid: spec.selector: Invalid value:
... field is immutable

You changed a label selector between chart versions. Kubernetes does not allow modifying selectors on existing Deployments. The fix is to uninstall and reinstall, or to never change selector labels after the first deployment. This is why the scaffolded _helpers.tpl keeps selector labels minimal and stable.
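
If you are already stuck with a changed selector, the least disruptive recovery I know of is to delete only the Deployment and let the next upgrade recreate it. This is a sketch, and it does cause a brief outage for that workload:

# Remove the Deployment that carries the old selector...
kubectl delete deployment myapp -n production

# ...then re-apply the release; the Deployment is recreated with the new selector
helm upgrade --install myapp ./myapp -f values-production.yaml -n production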

5. Hook Job Not Cleaned Up

Error: INSTALLATION FAILED: pre-install hook failed: 1 error occurred:
* job already exists

A previous hook Job still exists in the cluster. Add the delete policy annotation:

annotations:
  "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded

6. Dependency Version Not Found

Error: no repository definition for https://charts.bitnami.com/bitnami.
Use `helm repo add` to add the missing repos.

You need to add the repository before updating dependencies:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm dependency update ./myapp

Best Practices

  • Always use helm upgrade --install instead of separate install/upgrade commands. It is idempotent and safe for CI/CD. Pair it with --wait and --timeout so your pipeline fails if the deployment does not become healthy.

  • Never put secrets in values files committed to Git. Use --set flags from CI/CD secret variables, external secret operators like External Secrets or Sealed Secrets, or inject secrets from a vault. The values-secrets.yaml file should be in .gitignore.

  • Pin dependency versions precisely. Use 14.3.1 instead of 14.x.x in production. The x wildcard is convenient for development but can pull in breaking changes at the worst possible moment. Run helm dependency update in CI and commit the Chart.lock file.

  • Use the checksum annotation pattern on Deployments to trigger restarts when ConfigMaps or Secrets change. Without this, pods will not pick up configuration changes until they are manually restarted.

  • Lint and render in CI before deploying. Run helm lint, then helm template | kubectl apply --dry-run=client -f - as a validation step. Catch errors before they hit a real cluster.

  • Version your chart independently from your application. The chart version tracks packaging changes. The app version tracks the application. Conflating them leads to unnecessary chart bumps or missing chart fixes.

  • Keep templates readable. Complex logic in Go templates becomes unreadable fast. If a template file exceeds 100 lines of template logic, break it into named templates in _helpers.tpl or split the resource into multiple files.

  • Set resource requests and limits for every container. Kubernetes scheduling depends on resource requests. Without them, the scheduler cannot make intelligent placement decisions, and a single greedy pod can starve the node. Make your defaults conservative and let production values files override them.

  • Use helm diff before upgrading. Install the helm-diff plugin (helm plugin install https://github.com/databus23/helm-diff). Running helm diff upgrade myapp ./myapp -f values-prod.yaml shows you exactly what will change before you apply it.

  • Document your values.yaml. Add a comment above every value explaining what it does and what valid values look like. Future you (and your teammates) will thank you when trying to figure out what podDisruptionBudget.maxUnavailable should be set to.
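
To make that last point concrete, here is one way documented values can look. The # -- comment style is the convention used by the helm-docs tool, which can generate a README table from these comments; the podDisruptionBudget block is illustrative and not part of the example chart above:

# -- Number of pods to run when autoscaling is disabled
replicaCount: 1

podDisruptionBudget:
  # -- Create a PodDisruptionBudget for the Deployment
  enabled: false
  # -- Pods allowed to be unavailable during voluntary disruptions;
  # accepts an absolute number (1) or a percentage string ("25%")
  maxUnavailable: 1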
