This note came from a sparse work log, but the operational pattern is still worth keeping.

The starting point was a Proxmox web interface exposed directly on the usual management port. The goal was to put it behind cloudflared and give it a cleaner HTTPS endpoint without publishing the raw management URL directly.

One important detail that was missing from the first draft: this was not just a standalone cloudflared process on a single host. The setup was tied to a Kubernetes environment where internal services were being exposed outward through Cloudflare Tunnels.

The Before and After

The original note was basically just this shape:

https://<source-hostname>:8006

and the target state was this:

https://<public-proxy-hostname>

That is a small change on paper, but it usually makes the setup easier to use and easier to hide behind a more deliberate edge configuration.

Why Do This

For internal admin interfaces, I generally prefer a cleaner entry point over handing around a direct host-and-port URL.

A proxied hostname gives you a few practical advantages:

  • a more stable external URL
  • certificate handling at the Cloudflare edge
  • less need to expose the raw management address directly
  • a cleaner path for access control and future changes

What the Work Log Tells Me

The note mentioned:

cloudflared

That tells me the actual work here was not really about Proxmox itself. It was about using a Cloudflare Tunnel to map the Proxmox web UI to a friendlier public hostname.

In this case, the tunnel lived in a Kubernetes cluster and was being used as part of a broader pattern for exposing internal services through Cloudflare. That matters, because the operational question becomes less “how do I reverse proxy Proxmox?” and more “how do I publish a service from this environment without exposing the raw endpoint directly?”
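The original note did not record the exact commands, but the standard cloudflared workflow for mapping a public hostname to an origin looks roughly like this. The tunnel name and hostname below are placeholders, not values from the note:

```shell
# One-time: authenticate cloudflared against the Cloudflare account
cloudflared tunnel login

# Create a named tunnel ("proxmox" is a placeholder name)
cloudflared tunnel create proxmox

# Point a public hostname at the tunnel via a Cloudflare DNS record
cloudflared tunnel route dns proxmox proxmox.example.com

# Run the tunnel using a config file that maps the hostname to the origin
cloudflared tunnel run proxmox
```

In a Kubernetes setup like this one, the last step is replaced by running cloudflared as an in-cluster workload, but the tunnel and DNS routing steps are the same.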

Kubernetes Context

The more accurate framing for this note is:

  • cloudflared was part of the Kubernetes environment
  • the public hostname was handled at the Cloudflare edge
  • the internal target was a service reachable from that cluster context

If I were writing the deployment shape more explicitly today, I would describe it as a tunnel that forwards a public hostname to an internal service endpoint, often through a Kubernetes Service object or another origin reachable from the cluster.

That generalized shape looks more like this:

apiVersion: v1
kind: Service
metadata:
  name: proxmox-web
spec:
  ports:
    - name: https
      port: 8006
      targetPort: 8006
---
# Proxmox runs outside the cluster, so the Service has no pod
# selector; a manual Endpoints object points at the host instead.
apiVersion: v1
kind: Endpoints
metadata:
  name: proxmox-web  # must match the Service name
subsets:
  - addresses:
      - ip: 192.0.2.10  # placeholder for the Proxmox host's address
    ports:
      - name: https
        port: 8006

paired with a cloudflared ingress definition that maps a public hostname to that internal service target.

In a generalized form, the tunnel side looks like this:

ingress:
  - hostname: proxmox.example.com
    service: https://proxmox-web.default.svc.cluster.local:8006
    originRequest:
      noTLSVerify: true
  - service: http_status:404

The exact cloudflared configuration will vary depending on whether you run it as a Kubernetes deployment, sidecar, or another managed tunnel pattern. I am keeping the example generalized here because the original note did not include the final full config, and I do not want to imply details that were not actually recorded.
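For the Kubernetes deployment pattern specifically, a minimal sketch might look like the manifest below. The image tag, secret name, and volume layout are assumptions for illustration, not values recorded in the original note:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudflared
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloudflared
  template:
    metadata:
      labels:
        app: cloudflared
    spec:
      containers:
        - name: cloudflared
          image: cloudflare/cloudflared:latest  # pin a specific version in practice
          args: ["tunnel", "--config", "/etc/cloudflared/config.yaml", "run"]
          volumeMounts:
            - name: config
              mountPath: /etc/cloudflared
              readOnly: true
      volumes:
        - name: config
          secret:
            secretName: cloudflared-config  # holds config.yaml and the tunnel credentials file
```

The ingress rules shown earlier would live in that mounted config.yaml, alongside the tunnel ID and credentials reference.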

Practical Checks

If I were rebuilding this from the note today, the checks I would care about are:

  • verify that Proxmox is still reachable locally on port 8006
  • verify that the Kubernetes service target is reachable from the cloudflared workload
  • verify that the tunnel maps the public hostname to the correct origin service
  • verify how TLS is handled between cloudflared and the origin
  • confirm that the public hostname resolves and presents a valid certificate
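Those checks map to a handful of commands. Hostnames, namespaces, and the deployment name here are placeholders, and the in-cluster check assumes curl is available in the workload image (otherwise a debug pod works):

```shell
# 1. Proxmox still answers locally (self-signed cert, so skip verification)
curl -sk -o /dev/null -w '%{http_code}\n' https://<source-hostname>:8006

# 2. The Service target answers from the cloudflared workload's context
kubectl exec deploy/cloudflared -- \
  curl -sk -o /dev/null -w '%{http_code}\n' \
  https://proxmox-web.default.svc.cluster.local:8006

# 3. The public hostname resolves and presents a valid edge certificate
curl -sv https://proxmox.example.com -o /dev/null 2>&1 | grep -E 'subject:|issuer:'
```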

Closing Thought

Sometimes the best notes are not long. They just capture the before, the after, and the tool that made the change happen.

This is one of those notes: take a raw Proxmox endpoint, put it behind cloudflared, and make the operational URL cleaner and easier to manage.