Uptime Kuma is simple until you care about moving an existing setup without losing the data that actually matters. That is what this note was really about.
The practical job here was:
- back up the existing application data
- install Uptime Kuma on the new Kubernetes cluster
- restore the old data set
- expose the service cleanly
1. Back Up the Existing Data
The application keeps its important state in /app/data, so the first move was to archive that directory from the running container:
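A sketch of that backup step, assuming the old instance also ran in Kubernetes (with plain Docker, `docker exec` and `docker cp` are the analogues); the namespace and pod name here are placeholders:

```shell
# Placeholders: old-cluster namespace "monitoring", pod "uptime-kuma-old".
# Archive /app/data inside the running container (relative to /app, so the
# archive contains a top-level "data/" entry)...
kubectl exec -n monitoring uptime-kuma-old -- \
  tar -czf /tmp/kuma-data.tar.gz -C /app data
# ...then pull the archive down to the workstation.
kubectl cp monitoring/uptime-kuma-old:/tmp/kuma-data.tar.gz ./kuma-data.tar.gz
```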
This is a small step, but it is the difference between “new install” and “real migration.”
2. Install the Chart
The chart install itself was straightforward:
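A minimal sketch of the install, assuming the community Helm chart (dirsigler/uptime-kuma-helm); the repo URL, release name, and namespace are assumptions to adjust for your setup:

```shell
# Assumed community chart repo; verify the URL against the chart's own README.
helm repo add uptime-kuma https://helm.irsigler.cloud
helm repo update
# Release name and namespace are placeholders.
helm install uptime-kuma uptime-kuma/uptime-kuma \
  --namespace uptime-kuma --create-namespace
```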
The note also captured a version bump after the first install because the old data set expected a newer image:
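The bump itself can be as small as pinning the image tag, assuming the chart exposes the usual `image.tag` value. The tag below is a placeholder: pick the version the backed-up data was written by (or newer), because Uptime Kuma migrates its database forward, not back.

```shell
# Placeholder tag: match or exceed the version the old instance was running.
helm upgrade uptime-kuma uptime-kuma/uptime-kuma \
  --namespace uptime-kuma \
  --set image.tag=1.23.13 \
  --reuse-values
```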
3. Restore the Existing Data
Once the new deployment was up, copy the archive back into the pod:
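Roughly like this, with the pod name looked up via a label selector (the selector and namespace are assumptions matching a standard chart install):

```shell
# Assumed selector; check with: kubectl get pods -n uptime-kuma --show-labels
POD=$(kubectl get pods -n uptime-kuma \
  -l app.kubernetes.io/name=uptime-kuma \
  -o jsonpath='{.items[0].metadata.name}')
kubectl cp ./kuma-data.tar.gz "uptime-kuma/${POD}:/tmp/kuma-data.tar.gz"
```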
And then restore it inside the container:
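A sketch of the in-container restore, reusing the same placeholder pod lookup. Note that it unpacks over the freshly initialized /app/data, which is the point:

```shell
POD=$(kubectl get pods -n uptime-kuma \
  -l app.kubernetes.io/name=uptime-kuma \
  -o jsonpath='{.items[0].metadata.name}')
# Extract into /app because the archive was created relative to /app
# and carries a top-level "data/" directory.
kubectl exec -n uptime-kuma "$POD" -- \
  tar -xzf /tmp/kuma-data.tar.gz -C /app
```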
If I were rewriting this for a production runbook, I would probably make the stop, restore, and restart flow more explicit so the state transition is less hand-wavy.
4. Expose the Service
The original note changed the service to LoadBalancer:
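Either a one-off patch or a chart-level change works; both forms below assume the release and service are named `uptime-kuma`, and the second assumes the chart exposes a standard `service.type` value:

```shell
# One-off patch of the existing service...
kubectl patch svc uptime-kuma -n uptime-kuma \
  -p '{"spec": {"type": "LoadBalancer"}}'
# ...or, more durably, through the chart values:
helm upgrade uptime-kuma uptime-kuma/uptime-kuma \
  --namespace uptime-kuma \
  --set service.type=LoadBalancer \
  --reuse-values
```

The patch survives until the next `helm upgrade` rewrites the service, which is why the values-based form is the one worth keeping in a runbook.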
That is a very practical move on bare-metal or MetalLB-backed clusters because it gives the service a stable external IP without inventing more ingress plumbing than the job needs.
5. Restart and Verify
The restart sequence in the note was blunt and useful:
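Something like the following, assuming the chart deploys a Deployment (some setups use a StatefulSet; swap the kind accordingly):

```shell
# Bounce the workload so it reads the restored /app/data on startup...
kubectl rollout restart deployment/uptime-kuma -n uptime-kuma
kubectl rollout status deployment/uptime-kuma -n uptime-kuma
# ...then confirm the external IP handed out by MetalLB / the cloud LB.
kubectl get svc uptime-kuma -n uptime-kuma
```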
I like notes like this because they preserve the exact thing you do when you just need the restored app to come back cleanly.
Closing Thought
This is not really a Helm post. It is a migration post disguised as a Helm post.
The install itself is easy. The part worth remembering is how to preserve the existing Uptime Kuma data and land it on the new cluster without turning the move into a rebuild.