This draft comes from a cluster task where I needed a workload to pull an image from a private registry. The container runtime on the nodes was not really the hard part. The practical issue was making sure Kubernetes itself had the right credentials in the right namespace and that the deployment referenced them correctly.

The Goal

The goal was to deploy an application that used an image stored in a private registry:

  • create the registry credential as a Kubernetes secret
  • make sure the target namespace existed
  • attach the secret to the workload through imagePullSecrets
  • validate that the deployment could actually pull and start

1. Create the Registry Secret

The quickest path was creating a Docker registry secret directly:

kubectl create secret docker-registry private-registry-credentials \
  --docker-server=registry.example.com \
  --docker-username=ci-bot \
  --docker-password='<registry-token>' \
  --docker-email=[email protected] \
  -n app-test

Kubernetes stores this as a kubernetes.io/dockerconfigjson secret.
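Under the hood that secret is just a base64-encoded Docker config JSON. Here is a local sketch of the payload the command above produces (same registry, username, and token as the example; the "auth" field is base64 of "username:password"):

```shell
# Sketch of the .dockerconfigjson payload kubectl builds for the secret.
# The "auth" field is base64 of "username:password".
AUTH=$(printf '%s' 'ci-bot:<registry-token>' | base64)
cat <<EOF
{"auths":{"registry.example.com":{"username":"ci-bot","password":"<registry-token>","email":"[email protected]","auth":"$AUTH"}}}
EOF
```

You can confirm what the cluster actually stored with `kubectl get secret private-registry-credentials -n app-test -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d`.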

2. Make Sure the Namespace Exists

One of the early mistakes in my original note was trying to apply the deployment before the namespace existed. Kubernetes rejected it immediately with a NotFound error for the namespace. The same ordering applies to the secret in step 1: it cannot be created in app-test until the namespace exists.

Create the namespace first:

kubectl create namespace app-test

Then apply the workload:

kubectl apply -f deployment.yaml -n app-test
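Since I tend to re-run this kind of setup, I now guard the namespace step so it is idempotent. A small sketch (`ensure_namespace` is my own naming, and it assumes kubectl access):

```shell
# Sketch: hypothetical ensure_namespace helper. Creates the namespace only
# if it does not already exist, so re-running the setup is harmless.
ensure_namespace() {
  kubectl get namespace "$1" >/dev/null 2>&1 || kubectl create namespace "$1"
}
# Usage: ensure_namespace app-test
```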

3. Reference the Secret in the Deployment

The key part is putting imagePullSecrets under the pod spec, not under the container itself.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-private-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-private-app
  template:
    metadata:
      labels:
        app: sample-private-app
    spec:
      imagePullSecrets:
        - name: private-registry-credentials
      containers:
        - name: sample-private-app
          image: registry.example.com/team/app/sample-private-app:latest
          ports:
            - containerPort: 9007

That placement matters. If the secret is missing, created in a different namespace than the workload, or placed under the container instead of the pod spec, the pod will sit in ErrImagePull or ImagePullBackOff and never start.
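A quick way to confirm all three conditions at once is a small pre-flight check. This is a sketch under my own naming (`check_pull_setup` is hypothetical; it assumes kubectl access and reads the secret name straight out of the deployment with jsonpath):

```shell
# Sketch: hypothetical check_pull_setup helper (assumes kubectl access).
# Verifies the namespace exists, the secret exists in that namespace, and
# the deployment's pod template actually references that secret name.
check_pull_setup() {
  ns="$1"; deploy="$2"; secret="$3"
  kubectl get namespace "$ns" >/dev/null 2>&1 \
    || { echo "namespace $ns missing"; return 1; }
  kubectl get secret "$secret" -n "$ns" >/dev/null 2>&1 \
    || { echo "secret $secret missing in $ns"; return 1; }
  ref=$(kubectl get deployment "$deploy" -n "$ns" \
    -o jsonpath='{.spec.template.spec.imagePullSecrets[*].name}')
  [ "$ref" = "$secret" ] \
    || { echo "deployment references '$ref', expected '$secret'"; return 1; }
  echo "pull setup looks consistent"
}
# Usage: check_pull_setup app-test sample-private-app private-registry-credentials
```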

4. Validate the Deployment

I like to verify from several angles:

kubectl get deployment sample-private-app -n app-test
kubectl describe deployment/sample-private-app -n app-test
kubectl get pods -n app-test

If the image still does not pull, the pod events usually point to the real issue:

  • missing namespace
  • missing secret
  • bad credentials
  • wrong image path or tag
  • deployment not referencing the secret at all
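To get at those events quickly, I filter the namespace for warnings rather than scrolling through describe output. Another hypothetical helper, assuming kubectl access:

```shell
# Sketch: hypothetical pull_warnings helper (assumes kubectl access).
# Warning events in the namespace name the concrete failure, e.g.
# ErrImagePull, ImagePullBackOff, or an authentication error.
pull_warnings() {
  kubectl get events -n "$1" --field-selector type=Warning --sort-by=.lastTimestamp
}
# Usage: pull_warnings app-test
```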

What This Note Was Really About

The original note was not just about creating the secret. It was about validating the whole path from credentials to running workload.

In real platform work, that means checking more than the YAML:

  • does the namespace exist
  • does the secret exist in that same namespace
  • does the deployment refer to the right secret name
  • can the registry actually serve the image requested

Why This Still Matters with Containerd

Even when the cluster nodes use containerd, the Kubernetes-facing workflow for private image pulls still centers on registry credentials and pod configuration. That is why this remains a useful pattern to document: it is less about the node runtime itself and more about getting the Kubernetes integration right.

Closing Thought

This is a good example of the kind of note I like to turn into a blog draft. It starts as a small environment-specific task, but underneath it is a reusable pattern that other Kubernetes admins run into all the time.