This note came from a sparse work log, but the operational pattern is still worth keeping.
The starting point was a Proxmox web interface exposed directly on the usual management port. The goal was to put it behind cloudflared and give it a cleaner HTTPS endpoint without publishing the raw management URL directly.
One important detail that was missing from the first draft: this was not just a standalone cloudflared process on a single host. The setup was tied to a Kubernetes environment where internal services were being exposed outward through Cloudflare Tunnels.
The Before and After
The original note was basically just a before and an after: the raw management URL, `https://<host>:8006`, on one side, and a clean proxied hostname (something like `proxmox.example.com`; the note did not record the real name) on the other.
That is a small change on paper, but it usually makes the setup easier to use and easier to hide behind a more deliberate edge configuration.
Why Do This
For internal admin interfaces, I generally prefer a cleaner entry point over handing around a direct host-and-port URL.
A proxied hostname gives you a few practical advantages:
- a more stable external URL
- certificate handling at the Cloudflare edge
- less need to expose the raw management address directly
- a cleaner path for access control and future changes
What the Work Log Tells Me
The note itself recorded little beyond the tool involved, but that tells me the actual work here was not really about Proxmox itself. It was about using a Cloudflare Tunnel to map the Proxmox web UI to a friendlier public hostname.
In this case, the tunnel lived in a Kubernetes cluster and was being used as part of a broader pattern for exposing internal services through Cloudflare. That matters, because the operational question becomes less “how do I reverse proxy Proxmox?” and more “how do I publish a service from this environment without exposing the raw endpoint directly?”
Kubernetes Context
The more accurate framing for this note is:
- cloudflared was part of the Kubernetes environment
- the public hostname was handled at the Cloudflare edge
- the internal target was a service reachable from that cluster context
If I were writing the deployment shape more explicitly today, I would describe it as a tunnel that forwards a public hostname to an internal service endpoint, often through a Kubernetes Service object or another origin reachable from the cluster.
That generalized shape is an internal service endpoint reachable from the cluster,
paired with a cloudflared ingress definition that maps a public hostname to that internal service target.
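One way the service side could be sketched, assuming the Proxmox host sits outside the cluster and is published inward as a selector-less Service. All names, the namespace, and the IP are illustrative, not from the note:

```yaml
# Selector-less Service plus a manual Endpoints object, pointing at a
# Proxmox host that lives outside the cluster (IP is a placeholder).
apiVersion: v1
kind: Service
metadata:
  name: proxmox
  namespace: infra
spec:
  ports:
    - port: 8006
      targetPort: 8006
---
apiVersion: v1
kind: Endpoints
metadata:
  name: proxmox        # must match the Service name
  namespace: infra
subsets:
  - addresses:
      - ip: 192.0.2.10   # placeholder management IP
    ports:
      - port: 8006
```

With that in place, anything in the cluster, including the cloudflared workload, can reach the Proxmox UI at a stable in-cluster DNS name instead of a hardcoded IP.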
In a generalized form, the tunnel side is an ingress rule set that routes the public hostname to that internal origin and falls through to a catch-all for everything else.
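A minimal sketch of that tunnel configuration, in the `config.yaml` shape cloudflared uses for locally managed tunnels. The tunnel ID, hostnames, and service target are placeholders, since the original note did not include the real values:

```yaml
# cloudflared config.yaml (illustrative; not the recorded config)
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/creds/credentials.json
ingress:
  # Route the public hostname to the in-cluster Proxmox origin.
  - hostname: proxmox.example.com
    service: https://proxmox.infra.svc.cluster.local:8006
    originRequest:
      # Proxmox ships a self-signed certificate by default, so the
      # tunnel-to-origin hop may need TLS verification relaxed.
      noTLSVerify: true
  # Required catch-all rule for unmatched requests.
  - service: http_status:404
```

Whether `noTLSVerify` is appropriate depends on how TLS between cloudflared and the origin is handled in your environment; a trusted certificate on the origin is the stricter option.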
The exact cloudflared configuration will vary depending on whether you run it as a Kubernetes deployment, sidecar, or another managed tunnel pattern. I am keeping the example generalized here because the original note did not include the final full config, and I do not want to imply details that were not actually recorded.
Practical Checks
If I were rebuilding this from the note today, the checks I would care about are:
- verify that Proxmox is still reachable locally on port 8006
- verify that the Kubernetes service target is reachable from the cloudflared workload
- verify that the tunnel maps the public hostname to the correct origin service
- verify how TLS is handled between cloudflared and the origin
- confirm that the public hostname resolves and presents a valid certificate
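Those checks could be run roughly like this. Hostnames, the namespace, and the workload name are placeholders, and the in-cluster check assumes a curl binary is available in the cloudflared pod, which the stock image may not include:

```shell
# 1. Raw origin still answers on 8006 (Proxmox uses a self-signed cert, hence -k).
curl -sk -o /dev/null -w '%{http_code}\n' https://proxmox-host.internal:8006

# 2. The service target is reachable from the cloudflared workload.
kubectl -n infra exec deploy/cloudflared -- \
  curl -sk -o /dev/null -w '%{http_code}\n' \
  https://proxmox.infra.svc.cluster.local:8006

# 3. The tunnel's ingress rules parse and match expectations.
kubectl -n infra exec deploy/cloudflared -- cloudflared tunnel ingress validate

# 4. The public hostname resolves and presents a valid edge certificate
#    (no -k here: the Cloudflare edge certificate should validate cleanly).
curl -s -o /dev/null -w '%{http_code}\n' https://proxmox.example.com
```

A 200 or a login-page redirect on each HTTP check, plus a clean `ingress validate`, covers the whole path from origin to edge.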
Closing Thought
Sometimes the best notes are not long. They just capture the before, the after, and the tool that made the change happen.
This is one of those notes: take a raw Proxmox endpoint, put it behind cloudflared, and make the operational URL cleaner and easier to manage.