Private Registry
It turns out we will want both to write our own microservice and to store a big C build image, which avoids some, frankly, ludicrous failure-to-cache-anything issues.
Overview
A registry is just the archive of all those container images we want to use: k8s.gcr.io, docker.io, whatever.
A private registry is no different, just a) local and b) private. In the latter case we can keep all of our half-baked, poorly maintained, semi-forgotten hacks away from prying eyes.
The awkwardness is in getting anything and everything to trust this new source of images. There are two parts to that, as well: we have to secure access with TLS, meaning we have to tell all possible users about its (self-signed) Certificate; and we have to be able to authenticate to it (as in a username/password pair).
This is broadly following Varun Kumar G's thoughts in https://medium.com/swlh/deploy-your-private-docker-registry-as-a-pod-in-kubernetes-f6a489bf0180 but shaped slightly according to need.
I think there's one small gotcha in that the registry name we choose, say, docker-registry, must be used consistently across the piece (certainly in the DNS and in the CN of the certificate). Bear that in mind.
Service's Certs and Auth
We need to stash some data for our registry in files for later use. We'll use /registry/{certs,auth}, the same as Varun, and start our usual list of variables.
# reg_svc_name=docker-registry
# certs_secret=certs-secret
# auth_secret=auth-secret
# reg_cred_secret=reg-cred-secret
# reg_dir=/registry
# mkdir -p ${reg_dir}/{certs,auth}
# cd ${reg_dir}
# tls_key=certs/tls.key
# tls_crt=certs/tls.crt
# openssl req -x509 -newkey rsa:4096 -days 365 -nodes -sha256 \
    -keyout ${tls_key} -out ${tls_crt} -subj "/CN=${reg_svc_name}" \
    -addext "subjectAltName = DNS:${reg_svc_name}"
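A quick sanity check that the CN and the SAN came out as intended doesn't hurt. Assuming a reasonably recent OpenSSL (1.1.1 or later for the -ext option):

# openssl x509 -in ${tls_crt} -noout -subject -ext subjectAltName

should report a subject of CN = docker-registry and a DNS:docker-registry alternative name.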
Next, we need to generate an htpasswd -B (i.e. bcrypt rather than crypt) password file. The nominal command is:
# sudo docker run --rm --entrypoint htpasswd registry -Bbn user pass
but registry (the registry image) releases after 2.7.0 have dropped the apache2-utils package because of a CVE.
However, do we really want to be running docker just to get htpasswd? Nominally htpasswd is in apache2-utils but Fedora is suggesting httpd-tools:
# dnf install httpd-tools
Back on track:
# auth_htpasswd=auth/htpasswd
# htpasswd -Bbn ${reg_user} ${reg_pass} > ${auth_htpasswd}
for some ${reg_user} and ${reg_pass}. This is the account you will be authorising your container runtimes (docker, containerd etc.) to use.
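If you're of a suspicious mind, htpasswd (from Apache 2.4 on) can verify the entry we just wrote:

# htpasswd -vb ${auth_htpasswd} ${reg_user} ${reg_pass}

which should report the password as correct.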
Secrets
Kubernetes likes a good Secret although quite a lot of the time they are simply used as a fancy way of mapping files into containers.
Here we need a Secret for the TLS Certificate and the htpasswd file for the registry image to use, and a docker-registry Secret (the Secret's Type is coincidental to our Service's Name) for the container runtime to use to access the registry:
# kubectl create secret tls ${certs_secret} \
    --cert=${reg_dir}/${tls_crt} --key=${reg_dir}/${tls_key}
# kubectl create secret generic ${auth_secret} --from-file=${reg_dir}/${auth_htpasswd}
# kubectl create secret docker-registry ${reg_cred_secret} \
    --docker-server=${reg_svc_name}:5000 \
    --docker-username=${reg_user} --docker-password=${reg_pass}
Secret Stuff
Let's look a little deeper at ${auth_secret} because it's going to be used as a mountpoint later.
I'm using myuser and mypass for the ${reg_user} and ${reg_pass} in case you want to verify locally.
# kubectl get secret/auth-secret -o yaml
apiVersion: v1
data:
  htpasswd: bXl1c2VyOiQyeSQwNSROdHRWOUVwOHNjZnN5SWg0djZ3LzF1bVl3aVhSdUN2M0JTOUVuY0xkdDQ5UW9XOHhyUjg4aQoK
kind: Secret
...
Base64 encoding again:
# base64 -d
bXl1c2VyOiQyeSQwNSROdHRWOUVwOHNjZnN5SWg0djZ3LzF1bVl3aVhSdUN2M0JTOUVuY0xkdDQ5UW9XOHhyUjg4aQoK
^D
myuser:$2y$05$NttV9Ep8scfsyIh4v6w/1umYwiXRuCv3BS9EncLdt49QoW8xrR88i
which looks very much like /registry/auth/htpasswd.
The docker-registry secret is differently similar:
# kubectl get secret/reg-cred-secret -o yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJkb2NrZXItcmVnaXN0cnk6NTAwMCI6eyJ1c2VybmFtZSI6Im15dXNlciIsInBhc3N3b3JkIjoibXlwYXNzIiwiYXV0aCI6ImJYbDFjMlZ5T20xNWNHRnpjdz09In19fQ==
kind: Secret
...
where:
# base64 -d
eyJhdXRocyI6eyJkb2NrZXItcmVnaXN0cnk6NTAwMCI6eyJ1c2VybmFtZSI6Im15dXNlciIsInBhc3N3b3JkIjoibXlwYXNzIiwiYXV0aCI6ImJYbDFjMlZ5T20xNWNHRnpjdz09In19fQ==
^D
{"auths":{"docker-registry:5000":{"username":"myuser","password":"mypass","auth":"bXl1c2VyOm15cGFzcw=="}}}
where:
# base64 -d
bXl1c2VyOm15cGFzcw==
^D
myuser:mypass
Fair enough.
Storage
Varun uses /tmp/registry for the registry's backing store. Obviously that raises a few alarm bells about persistence: not just that the data won't survive the node restarting, but that while the node is down Kubernetes will have restarted the Pod on another node in the meanwhile.
Of interest, an empirical study suggests that a Pod can be restarted and will have retained knowledge about the images it has been given but, as the backing store has gone, access to them fails. Sad faces.
So an obvious step is to have all worker nodes mount the same shared file store, trusting that the system is capable enough of ensuring that only one instance of the registry image is ever running across all your nodes.
Hence we might have:
# mountpoint=/mnt/registry
and have managed /etc/fstab on the worker nodes separately.
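As a sketch only: assuming an NFS server, nfs-server, exporting /export/registry (both names made up here), each worker's /etc/fstab might gain a line like:

nfs-server:/export/registry  /mnt/registry  nfs  defaults  0 0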
That now brings us to Kubernetes and PersistentVolumes and PersistentVolumeClaims where, not unreasonably, the latter is staking a claim to some amount of the former. How multiple claims into the same volume are handled is beyond the scope of this article.
# repo_pv=docker-repo-pv
# repo_pvc=docker-repo-pvc
# kubectl create -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ${repo_pv}
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: ${mountpoint}
EOF
and
# kubectl create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ${repo_pvc}
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
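It's worth checking that the claim has actually bound to the volume -- both should report a STATUS of Bound:

# kubectl get pv/${repo_pv} pvc/${repo_pvc}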
Registry Pod
Now we can dive straight in and create a registry Pod...
Warning
Except Varun's example has a problem for us. He creates a Pod, which is all well and good until we restart a worker node, when our Pod disappears and never comes back. Oh dear.
We really need to describe a Deployment:
# kubectl create -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker-registry
  labels:
    app: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
      - name: registry
        image: registry:2.6.2
        volumeMounts:
        - name: repo-vol
          mountPath: "/var/lib/registry"
        - name: certs-vol
          mountPath: "/certs"
          readOnly: true
        - name: auth-vol
          mountPath: "/auth"
          readOnly: true
        env:
        - name: REGISTRY_AUTH
          value: "htpasswd"
        - name: REGISTRY_AUTH_HTPASSWD_REALM
          value: "Registry Realm"
        - name: REGISTRY_AUTH_HTPASSWD_PATH
          value: "/${auth_htpasswd}"
        - name: REGISTRY_HTTP_TLS_CERTIFICATE
          value: "/${tls_crt}"
        - name: REGISTRY_HTTP_TLS_KEY
          value: "/${tls_key}"
      volumes:
      - name: repo-vol
        persistentVolumeClaim:
          claimName: ${repo_pvc}
      - name: certs-vol
        secret:
          secretName: ${certs_secret}
      - name: auth-vol
        secret:
          secretName: ${auth_secret}
EOF
which is more or less the same thing but with the spec.containers indented (four spaces) inside the spec.template, and the outer spec defining the number of replicas (one!).
There's lots to take in here:
- the image we're using is registry:2.6.2 -- I guess I cut'n'pasted and could (should?) have used latest
- volumeMounts are a two-step motion:
- we give an in-container mountPath and reference a later volume
- the volume can reference:
- a persistentVolumeClaim
- a Secret which, clearly, Kubernetes knows how to manipulate into the form of a file in the filesystem.
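At this point the Deployment should have spun up its single replica, which is easily confirmed:

# kubectl get deployment/docker-registry
# kubectl get pods -l app=registry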
Registry Service
We can wrap a Service around our Pod:
# kubectl create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: ${reg_svc_name}
spec:
  selector:
    app: registry
  ports:
  - port: 5000
    targetPort: 5000
EOF
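While we're here, the Service's CLUSTER-IP is worth noting down as we'll want it for /etc/hosts entries later:

# kubectl get svc/${reg_svc_name}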
ExternalIP
Our registry is visible to Kubernetes but not to the outside world, which we are about to want it to be.
I only have one master node and so I'm happy for it to do the limited amount of hosting/rerouting of registry traffic for me. You may want to investigate something more robust.
We need to bodge in an ExternalIP:
# ips=( $(some command to figure out my node's IP) )
# kubectl patch svc ${reg_svc_name} -p \
    "{\"spec\": {\"type\": \"ClusterIP\", \"externalIPs\":[\"${ips[*]}\"]}}"
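What "some command" is depends on your setup. As a hedged sketch, if the master node's first reported address is the one you want then:

# ips=( $(hostname -I | awk '{ print $1 }') )

might do, although you should verify it against your actual network layout first.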
General Access
Now the fun starts. We have a Kubernetes-accessible registry on some random Service IP address (or our ExternalIP address) using a TLS certificate no-one has seen before and some authentication account we just made up.
DNS
From outside the Kubernetes cluster we need to have a DNS Resource Record for ${reg_svc_name} pointing to the ExternalIP.
Inside the Kubernetes cluster we need to have an entry in /etc/hosts for the Service's IP address (under the CLUSTER-IP column) for ${reg_svc_name} on each node.
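For example, supposing the Service's CLUSTER-IP had come out as 10.104.181.7 (a made-up value, check kubectl get svc for yours), each node would want a line like:

10.104.181.7    docker-registry

in its /etc/hosts.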
Container Runtimes
We have (will have) two container runtimes to handle. In the Kubernetes cluster we are using containerd. Soon, though, we'll be creating an image or two on a totally separate host outside of the cluster using docker.
In both cases we'll need the Certificate we just created in ${reg_dir}/${tls_crt}.
Docker
Docker is much simpler and the documentation is actually correct. For some (new) registry that we need to trust we need to put its Certificate in /etc/docker/certs.d/{registry-name[:port]}/ca.crt.
The important bit to note, here, is that the directory name should match the name/port combination you will be referring to the registry as:
host# mkdir -p /etc/docker/certs.d/${reg_svc_name}:5000
host# cd /etc/docker/certs.d/${reg_svc_name}:5000
host# cat <<EOF > ca.crt
{contents of ${reg_dir}/${tls_crt} }
EOF
Subsequently you can:
# docker login ${reg_svc_name}:5000 -u ${reg_user}
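after which, assuming you have some image to hand -- alpine purely as an example -- you can tag and push to it:

host# docker tag alpine:latest ${reg_svc_name}:5000/alpine:latest
host# docker push ${reg_svc_name}:5000/alpine:latest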
containerd
containerd's documentation is a bit more cryptic and I had to resort to strace'ing the binary to find out that it works much in the same way as Docker, above.
One-time Change
There is a one-off change we need to make to convince containerd to go looking for new certificates by editing the config file and setting config_path to /etc/containerd/certs.d:
# vi +/config_path /etc/containerd/config.toml

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = "/etc/containerd/certs.d"
Do that on all the Kubernetes nodes.
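containerd only reads its configuration at startup so you'll also want to restart it for the change to take effect:

# systemctl restart containerd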
For Each Registry
(In principle you are only doing this once but when testing...)
Here we create a similar directory to Docker but will populate it with two files:
# mkdir -p /etc/containerd/certs.d/${reg_svc_name}:5000
# cat <<EOF > /etc/containerd/certs.d/${reg_svc_name}:5000/hosts.toml
server = "https://${reg_svc_name}:5000"

[host."https://${reg_svc_name}:5000"]
  capabilities = ["pull", "resolve", "push"]
  ca = "${tls_crt##*/}"
  # ca can be an absolute path: /etc/containerd/certs.d/${reg_svc_name}:5000/${tls_crt##*/}
EOF
# cat <<EOF > /etc/containerd/certs.d/${reg_svc_name}:5000/${tls_crt##*/}
{contents of ${reg_dir}/${tls_crt} }
EOF
Do that on all the Kubernetes nodes.
If you recreate the registry but don't change anything else then you only need to overwrite the tls.crt file.
The problem with the documentation, fwiw, is that it is slightly lost in the (unexplained) miracle of using mirrors for well-known registries but fails to mention the :port part of the directory name.
Usage
You should be good to go with Docker, now.
containerd will still have some issues. It may well trust your Certificate but we haven't told it how to authenticate.
If you are running one-off commands then you might well use:
# crictl pull --creds ${reg_user}:${reg_pass} ${reg_svc_name}:5000/{image}:{tag}
and the more cumbersome:
# ctr image pull --hosts-dir=/etc/containerd/certs.d --user ${reg_user}:${reg_pass} ${reg_svc_name}:5000/{image}:{tag}
because ctr doesn't know to use /etc/containerd/certs.d.
Neither of which particularly cover themselves in glory as your password is available for all to see.
In practice you might not use crictl or ctr -- other than testing -- which raises the question, what does containerd use for access to the repository?
The (obvious) answer is that it uses the credentials that you supply. In other words, when you intend to use an image from this repository you need to pass some imagePullSecrets:
spec:
  imagePullSecrets:
  - name: ${reg_cred_secret}
the docker-registry Secret credentials we created earlier.
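Fleshed out into a complete Pod manifest, with my-image:latest standing in for whatever you've pushed:

# kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  imagePullSecrets:
  - name: ${reg_cred_secret}
  containers:
  - name: my-container
    image: ${reg_svc_name}:5000/my-image:latest
EOF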
You might even try:
$ kubectl run my-pod --image=${reg_svc_name}:5000/{image}:{tag} \
    --overrides="{\"apiVersion\":\"v1\",\"spec\":{\"imagePullSecrets\":[{\"name\":\"${reg_cred_secret}\"}]}}"