Uptime Kuma is simple until you care about moving an existing setup without losing the data that actually matters. That is what this note is about.

The practical job here was:

  • back up the existing application data
  • install Uptime Kuma on the new Kubernetes cluster
  • restore the old data set
  • expose the service cleanly

1. Back Up the Existing Data

The application keeps its important state in /app/data, so the first move was to archive that directory from the running container:

kubectl exec -it deployment/uptime-kuma -n monitoring -- \
  bash -c 'tar -C /app/data -czpf /app/data.tgz .'

kubectl cp -n monitoring <uptime-kuma-pod>:/app/data.tgz .

This is a small step, but it is the difference between “new install” and “real migration.”
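The same tar flags can be exercised locally to sanity-check the round trip before trusting it with real monitor data. A minimal sketch with throwaway paths (the `kuma.db` file stands in for the app's SQLite database; its contents here are placeholder):

```shell
# Sketch: verify the -czpf backup flags against a throwaway directory
# instead of the real /app/data.
workdir=$(mktemp -d)
mkdir -p "$workdir/data"
echo "sqlite-placeholder" > "$workdir/data/kuma.db"

# Same flags as the in-container backup: -C to avoid leading path
# components, -p to preserve permissions.
tar -C "$workdir/data" -czpf "$workdir/data.tgz" .

# Confirm the archive is valid gzip and lists the expected file.
gzip -t "$workdir/data.tgz"
tar -tzf "$workdir/data.tgz" | grep -q 'kuma.db' && echo "archive OK"
```

Because the archive is built with `-C`, it unpacks cleanly into any target directory without a nested path prefix.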

2. Install the Chart

The chart install itself was straightforward:

helm repo add uptime-kuma https://helm.irsigler.cloud
helm repo update

helm install uptime-kuma uptime-kuma/uptime-kuma \
  --namespace monitoring \
  --create-namespace

The note also captured a version bump after the first install, because the backed-up data had been written by a newer Uptime Kuma release than the chart's default image:

helm upgrade uptime-kuma uptime-kuma/uptime-kuma \
  -n monitoring \
  --set image.tag=<target-version>
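The same pin can live in a values file instead of a `--set` flag, which keeps the version choice in source control. A sketch, assuming the chart uses the conventional `image` block (check `helm show values uptime-kuma/uptime-kuma` for the real keys):

```yaml
# values.yaml -- sketch; <target-version> left elided, as in the note
image:
  tag: <target-version>
```

Applied with `helm upgrade uptime-kuma uptime-kuma/uptime-kuma -n monitoring -f values.yaml`.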

3. Restore the Existing Data

Once the new deployment is up, copy the archive back into the pod:

kubectl cp -n monitoring data.tgz <uptime-kuma-pod>:/app/data.tgz
kubectl exec -it deployment/uptime-kuma -n monitoring -- bash

And then restore it inside the container:

# Unpack over /app/data without leaving a stray copy of the archive inside it
tar -C data -xpf data.tgz

If I were rewriting this for a production runbook, I would probably make the stop, restore, and restart flow more explicit so the state transition is less hand-wavy.
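That explicit flow might look something like the sketch below, wrapped in a function so it can be reviewed before being pointed at a live cluster. The label selector and the single-replica assumption come from the chart's usual conventions, not from the original note; verify both with `kubectl get pods -n monitoring --show-labels` first.

```shell
# Sketch of an explicit restore -> restart sequence. Assumptions: the
# archive is ./data.tgz, the deployment runs one replica, and the pod
# carries the app.kubernetes.io/name=uptime-kuma label.
restore_and_restart() {
  local ns=monitoring
  local pod
  pod=$(kubectl get pods -n "$ns" -l app.kubernetes.io/name=uptime-kuma \
        -o jsonpath='{.items[0].metadata.name}')

  # Copy the archive in and unpack it over /app/data, permissions intact.
  kubectl cp -n "$ns" data.tgz "$pod":/app/data.tgz
  kubectl exec -n "$ns" "$pod" -- tar -C /app/data -xzpf /app/data.tgz

  # Bounce the deployment so the app reopens the restored database.
  kubectl scale deployment -n "$ns" uptime-kuma --replicas=0
  kubectl scale deployment -n "$ns" uptime-kuma --replicas=1
  kubectl rollout status deployment -n "$ns" uptime-kuma --timeout=120s
}
```

Calling `restore_and_restart` collapses the restore and restart steps into one reviewable unit; the restored data survives the bounce as long as the chart's default persistence (a PersistentVolumeClaim behind /app/data) is enabled.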

4. Expose the Service

The original note changed the service to LoadBalancer:

kubectl patch svc uptime-kuma -n monitoring --type='json' \
  -p '[{"op":"replace","path":"/spec/type","value":"LoadBalancer"}]'

That is a very practical move on bare-metal or MetalLB-backed clusters because it gives the service a stable external IP without inventing more ingress plumbing than the job needs.
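Once the patch lands, the assigned address can be read straight off the service status. A sketch, again as a function so it can be invoked when a cluster context is at hand; the jsonpath assumes MetalLB hands out an IP, while some cloud providers populate `.hostname` instead:

```shell
# Print the external address assigned to the patched service.
# Assumption: the load balancer reports an IP, not a hostname.
external_ip() {
  kubectl get svc uptime-kuma -n monitoring \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
}
```

If the output is empty, the service is still `<pending>` and MetalLB has not allocated an address yet.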

5. Restart and Verify

The restart sequence in the note was blunt and useful:

kubectl scale deployment -n monitoring uptime-kuma --replicas=0
kubectl scale deployment -n monitoring uptime-kuma --replicas=1
kubectl get pods -n monitoring -w

I like notes like this because they preserve the exact thing you do when you just need the restored app to come back cleanly.
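Since kubectl 1.15 the same bounce exists as a single command, though the note's scale-to-zero pair has one real advantage: if the chart backs /app/data with a ReadWriteOnce volume and the deployment strategy is RollingUpdate, a rolling restart can wedge, with the new pod waiting on a volume the old pod still holds. A sketch of the alternative, as a reviewable function:

```shell
# One-command restart; prefer the explicit scale 0 -> 1 pair when the
# PVC is ReadWriteOnce and the strategy is RollingUpdate.
bounce() {
  kubectl rollout restart deployment/uptime-kuma -n monitoring
  kubectl rollout status  deployment/uptime-kuma -n monitoring --timeout=120s
}
```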

Closing Thought

This is not really a Helm post. It is a migration post disguised as a Helm post.

The install itself is easy. The part worth remembering is how to preserve the existing Uptime Kuma data and land it on the new cluster without turning the move into a rebuild.