This note needs heavy redaction, so I am keeping it as a first-pass technical draft rather than pretending it is ready for publication.

The useful part of the source material is the deployment pattern:

  • a small Kubernetes cluster
  • MetalLB for service exposure
  • PostgreSQL as the database backend
  • a search backend that FusionAuth can actually talk to cleanly

1. Start with the Cluster Requirements

The note assumed a dedicated Kubernetes cluster, not just a namespace on an already noisy general-purpose platform.

That is a sensible choice for an authentication service. Even when the cluster is small, isolation and predictability matter.

2. Add MetalLB Early

One of the early setup steps was installing MetalLB via its Helm chart:

helm repo add metallb https://metallb.github.io/metallb
helm repo update
helm install --wait metallb metallb/metallb --namespace loadbalancer --create-namespace

That is a very practical choice on bare-metal or VM-based clusters where you still want stable service exposure without inventing extra proxy layers too early.
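The Helm install alone leaves MetalLB idle: it will not hand out addresses until you tell it which range it owns. A minimal sketch of that follow-up step, assuming layer-2 mode and a placeholder address range (the pool name, namespace, and range here are illustrative, not from the original note):

```shell
# Hypothetical address range -- substitute a range your network actually reserves.
# The resources land in the "loadbalancer" namespace to match the helm install above.
kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: loadbalancer
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: loadbalancer
spec:
  ipAddressPools:
    - default-pool
EOF
```

Once the pool exists, any Service of type LoadBalancer in the cluster gets an external IP from that range, which is what makes the later FusionAuth exposure step uneventful.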

3. Bring Up PostgreSQL First

The original note treated PostgreSQL as a prerequisite, which is the right mindset.

From an operations perspective, I care about these checkpoints before FusionAuth itself:

  • the PostgreSQL deployment is healthy
  • the service is reachable from inside the cluster
  • the database and role are created explicitly
  • the storage class choice is intentional

Those checkpoints matter more than the application Helm command itself.
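The checkpoints above can each be turned into a concrete command. This is a sketch only: the namespace, release name, role, and database names (`database`, `postgresql`, `fusionauth`) are assumptions for illustration, not values from the source note.

```shell
# 1. the PostgreSQL deployment is healthy
kubectl -n database rollout status statefulset/postgresql

# 2. the service is reachable from inside the cluster,
#    checked from a throwaway client pod rather than from outside
kubectl run pg-check --rm -it --restart=Never --image=postgres:16 -- \
  psql -h postgresql.database.svc.cluster.local -U fusionauth -d fusionauth -c 'SELECT 1;'

# 3. the database and role exist explicitly (run as a superuser on the server):
#    psql -U postgres -c '\du fusionauth'
#    psql -U postgres -c '\l fusionauth'

# 4. the storage class choice is intentional, not the cluster default by accident
kubectl -n database get pvc \
  -o custom-columns=NAME:.metadata.name,STORAGECLASS:.spec.storageClassName
```

If any of these fail, installing the FusionAuth chart just moves the failure somewhere harder to diagnose.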

4. Be Careful with the Search Backend

The note also captured a useful reality: getting the search backend right was not completely smooth.

That is worth keeping in the draft because it reflects what actually happens:

  • one chart path did not fit the security expectations cleanly
  • a simpler single-node layout was used for the search tier
  • cluster-local DNS and application connectivity still had to be verified by hand

That is a much more honest story than “install chart, everything works.”
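The "verify by hand" step can be made explicit. A sketch of the two checks, assuming an Elasticsearch-style backend exposed as a service named `search` in a `search` namespace on port 9200 (all placeholder names, not from the original note):

```shell
# cluster-local DNS: does the service name resolve at all?
kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup search.search.svc.cluster.local

# application connectivity: does the search tier actually answer on its API port?
kubectl run es-check --rm -it --restart=Never --image=curlimages/curl:8.8.0 -- \
  curl -s http://search.search.svc.cluster.local:9200/_cluster/health
```

Separating the DNS check from the HTTP check matters: a resolving name with a refused connection points at the pod or port, while a failing lookup points at the service definition.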

5. Treat Restore and Migration as a Separate Phase

The source note blurred installation and restoration together, which is common in real work logs. For a public article, I would probably separate them.

The useful structure would be:

  1. build the base FusionAuth platform
  2. verify PostgreSQL and search connectivity
  3. only then handle restore or migration of real data

That keeps the post easier to follow and easier to reuse.
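The three-phase structure can be enforced rather than just described: a restore script that refuses to run until the base-platform checks pass. This is a sketch under assumed hostnames and database names; the original note's actual values are redacted.

```shell
#!/bin/sh
# Restore preflight sketch: only touch real data after the platform verifies.
# All hostnames, ports, and the "fusionauth" role/database are assumptions.
set -eu

DB_HOST="${DB_HOST:-postgresql.database.svc.cluster.local}"
SEARCH_URL="${SEARCH_URL:-http://search.search.svc.cluster.local:9200}"
DUMP_FILE="${1:?usage: restore.sh <dump-file>}"

# phase 2: verify PostgreSQL and search connectivity
pg_isready -h "$DB_HOST" || { echo "database not ready; aborting restore" >&2; exit 1; }
curl -fsS "$SEARCH_URL/_cluster/health" >/dev/null || { echo "search not ready; aborting restore" >&2; exit 1; }

# phase 3: only then handle restore of real data
pg_restore -h "$DB_HOST" -U fusionauth -d fusionauth --clean --if-exists "$DUMP_FILE"
```

Keeping the guard in the script, rather than in the operator's memory, is what actually prevents installation and restoration from blurring back together.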

Closing Thought

This draft still needs a careful editorial pass before publication, mostly because the original note contains real secrets and internal environment details.

But the technical shape is strong enough to keep: a FusionAuth cluster build is not just an app deployment. It is a dependency and connectivity exercise first.