Deploy a Docker registry on bare-metal (Debian) Kubernetes

I previously installed a Kubernetes cluster on 3 “bare metal” Debian machines (OVH VPS).
Now I would like to deploy some applications on this cluster, but in order to do that I will need a Docker registry that is public, secured and protected.


  • Public: accessible from a public url (registry.demos.ovh).
  • Secured: accessible over HTTPS : https://registry.demos.ovh.
  • Protected: only known users can access (push and pull) the registry.

This post will not describe how to install kubernetes on debian (maybe I should write another post for that).

I assume that you have access to a k8s-equipped cluster, with 1 controller and 2 workers. I also assume that you have already installed and configured Ingress Nginx on your cluster, and that you have root (or sudo) access to all of the nodes on the cluster.

Install and configure storage (NFS server)

In order to use an NFS Server, we need to both install and configure it (duh). I did it directly from the master but you can do it from any server accessible from every node in the k8s cluster.

So let’s start the installation :

Installing the NFS server

First install the packages : nfs-kernel-server for the server and nfs-common for the NFS client.

sudo apt-get install nfs-kernel-server nfs-common

Then I recommend you create a folder dedicated to the nfs server, for example : /nfs

sudo mkdir /nfs
sudo chown -R nobody:nogroup /nfs
sudo chmod -R 755 /nfs

Note that the chmod command should be enough in most cases.

Now you need to edit the NFS exports file /etc/exports and add the following line :

/nfs 1.1.1.119(rw,insecure,no_root_squash,no_wdelay) 1.1.1.120(rw,insecure,no_root_squash,no_wdelay) 1.1.1.121(rw,insecure,no_root_squash,no_wdelay)

Then re-export the shares :

sudo exportfs -ra

Now the server should be accessible from every node.
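
To double-check before moving on, you can list the active exports directly on the server :

sudo exportfs -v
showmount -e localhost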

Installing the NFS client

Now, in order to use the NFS server, we need to install an NFS client on all the nodes in the cluster.

sudo apt-get install nfs-common

Do note that on the worker nodes we only need to install the client package.

Now, in order to test it, we can simply write a file (as root, from one of the worker nodes):

mkdir /mnt/nfs
mount -t nfs 1.1.1.119:/nfs /mnt/nfs/
cd /mnt/nfs
echo "mimiz.fr" > hello.txt

Then on the server, you should be able to see the file /nfs/hello.txt.
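
If you want to clean up after the test, unmount the share on the client :

cd /
sudo umount /mnt/nfs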


Creating the directory for docker registry images

In order to store the docker registry images, I created a directory under the /nfs directory :


sudo mkdir /nfs/docker-registry
sudo chown -R nobody:nogroup /nfs
sudo chmod -R 755 /nfs

So now you can store data on your NFS server.

Create a specific namespace for your registry

You can create the namespace using the kubectl command

kubectl create namespace registry

or using yaml

apiVersion: v1
kind: Namespace
metadata:
  name: registry

and apply it :

kubectl apply -f ./00-registry-namespace.yaml

Then set it as the default namespace :

kubectl config set-context $(kubectl config current-context) --namespace=registry
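
To confirm that the default namespace was actually switched, you can inspect the current context :

kubectl config view --minify --output 'jsonpath={..namespace}'
# should print: registry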

Create a PersistentVolume and PersistentVolumeClaim for your docker registry

Ok, now we will prepare storage for our registry. First we need to create a PersistentVolume file :

apiVersion: v1
kind: PersistentVolume
metadata:
  name: docker-registry-pv
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: 1.1.1.119
    path: "/nfs/docker-registry"

‼️ You need to replace the server IP with the one of your own NFS server. Note also that PersistentVolumes are cluster-scoped, so they do not take a namespace.

Then create the PersistentVolumeClaim file

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: docker-registry-pvc
  namespace: registry
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 30Gi

Then create both PersistentVolume and PersistentVolumeClaim :

kubectl create -f ./01-registry-pv.yaml
kubectl create -f ./02-registry-pvc.yaml

You should now be able to see your PersistentVolume when executing :

kubectl get pv
NAME                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                          STORAGECLASS   REASON    AGE
docker-registry-pv   30Gi       RWX            Retain           Bound     registry/docker-registry-pvc                            8m

You can see that docker-registry-pv is Bound to the claim registry/docker-registry-pvc.
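
You can check the claim side as well; its STATUS column should also show Bound :

kubectl get pvc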

Install and deploy a docker registry on the cluster

Creating the Deployment Config

In order to deploy a docker registry using our PersistentVolume we will create a Deployment, which will itself create a ReplicaSet and a Pod.

So here is the Deployment file :

apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker-registry
  namespace: registry
spec:
  replicas: 2
  selector:
    matchLabels:
      app: docker-registry
  template:
    metadata:
      labels:
        app: docker-registry
    spec:
      containers:
        - name: docker-registry
          image: registry:2.6.2
          env:
            - name: REGISTRY_HTTP_SECRET
              value: azerty
            - name: REGISTRY_HTTP_ADDR
              value: ":5000"
            - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
              value: "/var/lib/registry"
          ports:
          - name: http
            containerPort: 5000
          volumeMounts:
          - name: image-store
            mountPath: "/var/lib/registry"
      volumes:
      - name: image-store
        persistentVolumeClaim:
          claimName: docker-registry-pvc

Regarding the YAML file, you can see that we configured 2 replicas in order to keep the system alive if one of the pods fails. Each pod runs a single container named docker-registry, based on the registry image from the Docker Hub, pinned to version 2.6.2 (the latest at the time I wrote this article).

⚠️ Remember latest is NOT a version !

Create the deployment in the cluster :

kubectl apply -f ./03-registry-deployment.yaml

After a while our pods should be up and running :

kubectl get pods
NAME                               READY     STATUS    RESTARTS   AGE
docker-registry-849c56666b-j746p   1/1       Running   0          55s
docker-registry-849c56666b-jlbtb   1/1       Running   0          43s

Running the command kubectl exec -it docker-registry-849c56666b-j746p -- sh will open a shell inside the selected pod.
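
For example, once you have pushed an image, you can peek at the registry’s storage layout on the NFS-backed volume (the path below is the default layout of the registry’s filesystem driver):

kubectl exec -it docker-registry-849c56666b-j746p -- ls /var/lib/registry/docker/registry/v2/repositories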

Create the Service

Now our pods are running but we cannot use our registry yet … Let’s create a Service for that! This service should be accessible from outside the cluster, and since we are not hosted on a managed cloud provider (actually we are, but we can’t modify its external load balancer), we will use Ingress Nginx to access it from outside.

First let’s create our service :

kind: Service
apiVersion: v1
metadata:
  name: docker-registry
  namespace: registry
  labels:
    app: docker-registry
spec:
  selector:
    app: docker-registry
  ports:
  - name: http
    port: 5000

Then apply it :

kubectl apply -f ./04-registry-svc.yaml

Check if the service is running

kubectl get svc
NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
docker-registry   ClusterIP   10.102.6.228   <none>        5000/TCP   5s

As you can see the type is set to ClusterIP, which means our service is only accessible from within the cluster.

Let’s give it a try. First, start a temporary pod :

kubectl run -it --rm busybox --image=busybox --restart=Never -- sh

This should (after a few seconds) open a shell in a busybox container, so now let’s try to access the registry :

/ # wget http://10.102.6.228:5000/v2/_catalog -O -
Connecting to 10.102.6.228:5000 (10.102.6.228:5000)
{"repositories":[]}
-                    100% |**********************************************************************************************|    33   0:00:00 ETA
/ #

As you can see, another pod can access our registry via its internal IP (10.102.6.228). Actually, you can also access it via the service name:

/ # wget http://docker-registry:5000/v2/_catalog -O -
Connecting to docker-registry:5000 (10.102.6.228:5000)
{"repositories":["remi/alpine"]}
-                    100% |**********************************************************************************************|    33   0:00:00 ETA
/ #
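
This name resolution works thanks to the cluster DNS; from a pod in another namespace you would use the fully qualified service name (assuming the default cluster.local domain) :

/ # wget http://docker-registry.registry.svc.cluster.local:5000/v2/_catalog -O -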

So our service is now configured !

But we cannot access it from outside the cluster yet. Let’s fix that !

Make the registry publicly available

Creating an Ingress resource for your Registry

First you need to have Ingress Nginx installed and configured (the official installation instructions work pretty well).

Once you’ve done that, you should see an ingress-nginx namespace :

kubectl get namespace

Now you can create the ingress resource in your cluster :

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: registry-ingress
  namespace: registry
  labels:
    version: "1.0"
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: registry.demos.ovh
    http:
      paths:
      - path: /
        backend:
          serviceName: docker-registry
          servicePort: 5000

Apply it :

kubectl apply -f ./05-registry-ingress.yaml

Then you need to make the domain registry.demos.ovh resolve to the IP address of one of your worker nodes (any node except the controller). You can do this by modifying your /etc/hosts file.
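
For example, assuming 1.1.1.120 is one of your worker nodes, add a line like this to /etc/hosts on the machine you are testing from :

1.1.1.120    registry.demos.ovh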

So now, if you try to access the URL http://registry.demos.ovh … it should not work ! This is both because we did not use a LoadBalancer and because Ingress Nginx is listening on a specific NodePort.

Type the following command to check on which port Ingress Nginx is listening :

kubectl get svc -n ingress-nginx

This should output something like this :

NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
default-http-backend   ClusterIP   10.106.122.30   <none>        80/TCP                       11d
ingress-nginx          NodePort    10.99.248.231   <none>        80:32353/TCP,443:30249/TCP   11d

So you can see that Ingress Nginx is listening on ports :

  • 32353 for http requests
  • 30249 for https requests

Knowing this we can then try:

curl http://registry.demos.ovh:32353/v2/_catalog

This should output the list of repositories.

Ok, now our registry can be reached from outside the cluster with a pretty domain name, but we don’t want to type the port each time we want to connect to the registry.

In order to get rid of that port, we need to install a proxy in front of our cluster. In a cloud environment like GKE or AWS you could use a Service of type LoadBalancer; since we are on “bare metal” servers, we will instead install HAProxy on the master (or any other host that can reach our nodes).

Installing HAProxy

haproxy_logo

First we need to install the service :

sudo apt-get install haproxy

Then you need to configure it by editing / creating the file : /etc/haproxy/haproxy.cfg

global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log     global
    mode    tcp
    option  tcplog
    timeout client  1m
    timeout server  1m
    timeout connect 10s

frontend http-frontend
    bind *:80
    default_backend http-backend

backend http-backend
    balance roundrobin
    # 32353 is the NodePort of the Ingress HTTP service
    server worker1 1.1.1.120:32353 check
    server worker2 1.1.1.121:32353 check

frontend https-frontend
    bind *:443
    default_backend https-backend

backend https-backend
    balance roundrobin
    # 30249 is the NodePort of the Ingress HTTPS service
    server worker1 1.1.1.120:30249 check
    server worker2 1.1.1.121:30249 check

I chose to use HAProxy in TCP mode, in order to delegate everything regarding the HTTP or HTTPS negotiation to the Ingress Controller.

As you can see, http-backend points to the nodes’ Ingress HTTP NodePort, and https-backend points to the Ingress HTTPS NodePort.
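
Before restarting HAProxy, it is a good idea to validate the configuration file for syntax errors :

sudo haproxy -c -f /etc/haproxy/haproxy.cfg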

Restart HAProxy

sudo systemctl restart haproxy

You then need to make sure registry.demos.ovh now points to your HAProxy server (the master node in my example).

And try :

curl http://registry.demos.ovh/v2/_catalog

This should output the list of repositories.

Secure the registry with a valid certificate

Great ! Now our registry is available at http://registry.demos.ovh. This is a good start, but we would prefer it to be available over HTTPS. If you try to access it that way right now:

curl https://registry.demos.ovh/v2/_catalog

you will get a certificate error. This is normal, because we never configured any certificate.

Let’s see how to do that.

After a quick search online you should find a solution called cert-manager. As a side note, you might also find another one called kube-lego which is no longer maintained (they encourage you to migrate to cert-manager).

You can easily install it by following the official documentation. I installed it using Helm, the Kubernetes package manager :

helm install \
    --name cert-manager \
    --namespace kube-system \
    stable/cert-manager

Once cert-manager is installed, you can start using it.
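
You can quickly check that the cert-manager pod is up and running before going further :

kubectl get pods -n kube-system | grep cert-manager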

We need to create an Issuer, which will allow us to request certificates :

apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-prod
  namespace: registry
spec:
  acme:
    # The ACME server URL
    server: https://acme-v01.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: myemail@demos.ovh
    # Name of the secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsncrypt-prod-tls
    # Enable HTTP01 validations
    http01: {}

Apply it :

kubectl apply -f ./registry-issuer.yaml

Now that it has been created, let’s look at the letsncrypt-prod-tls secret :

kubectl get secrets
NAME                  TYPE                                  DATA      AGE
default-token-zz26l   kubernetes.io/service-account-token   3         2h
letsncrypt-prod-tls   Opaque                                1         50s

So now we can request a certificate from Let’s Encrypt, and store it in a Kubernetes Secret.

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: cert-registry-tls
  namespace: registry
spec:
  secretName: registry-tls
  issuerRef:
    name: letsencrypt-prod
  commonName: registry.demos.ovh
  dnsNames:
  - registry.demos.ovh
  acme:
    config:
    - http01:
        ingress: registry-ingress
      domains:
      - registry.demos.ovh

Apply it on the cluster,

kubectl apply -f ./registry-cert.yaml

Then, after a while you can check events on the certificate cert-registry-tls :

kubectl describe certificate cert-registry-tls

At the bottom of the output you should see something like this :

Events:
  Type     Reason                 Age              From                     Message
  ----     ------                 ----             ----                     -------
  Warning  ErrorCheckCertificate  2m               cert-manager-controller  Error checking existing TLS certificate: secret "registry-tls" not found
  Normal   PrepareCertificate     2m               cert-manager-controller  Preparing certificate with issuer
  Normal   PresentChallenge       2m               cert-manager-controller  Presenting http-01 challenge for domain registry.demos.ovh
  Normal   SelfCheck              2m               cert-manager-controller  Performing self-check for domain registry.demos.ovh
  Normal   ObtainAuthorization    10s              cert-manager-controller  Obtained authorization for domain registry.demos.ovh
  Normal   IssueCertificate       10s              cert-manager-controller  Issuing certificate...
  Normal   CeritifcateIssued      7s               cert-manager-controller  Certificated issued successfully
  Normal   RenewalScheduled       7s (x2 over 7s)  cert-manager-controller  Certificate scheduled for renewal in 1438 hours
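
At this point the registry-tls secret referenced by the Certificate should exist (cert-manager stores the key pair in a secret of type kubernetes.io/tls) :

kubectl get secret registry-tls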

Now that we have a certificate we need to edit the ingress configuration to add the SSL information.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: registry-ingress
  namespace: registry
  labels:
    version: "1.0"
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
    - secretName: registry-tls
      hosts:
        - registry.demos.ovh
  rules:
  - host: registry.demos.ovh
    http:
      paths:
      - path: /
        backend:
          serviceName: docker-registry
          servicePort: 5000

We can apply the new configuration as-is; since we did not change the resource name, Kubernetes will simply update it :

kubectl apply -f ./registry-ingress-ssl.yaml

So now if we try to call it again …

curl https://registry.demos.ovh/v2/_catalog

It should display the list of repositories (or at least an empty list, but no errors).
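
You can also inspect the certificate actually served by the ingress, to confirm it was issued by Let’s Encrypt :

echo | openssl s_client -connect registry.demos.ovh:443 -servername registry.demos.ovh 2>/dev/null | openssl x509 -noout -issuer -dates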

Try to push to your registry :

docker pull busybox:1.28.3
docker tag busybox:1.28.3 registry.demos.ovh/busybox:1.28.3
docker push registry.demos.ovh/busybox:1.28.3

This should work !

Protect the registry with htpasswd

Ok, so now we have a public registry that can receive data from anyone. This is cool, but I would prefer to allow pushes (and pulls) only for trusted users.

Looking at the Docker Registry documentation, we see that we can easily protect our registry with a login / password, using htpasswd.

In order to do this, first we need to create an htpasswd file. You can either do it with the htpasswd command from your system, or you can use a hack like this :

docker run --entrypoint htpasswd registry:2.6.2 -Bbn USERNAME PASSWORD > htpasswd

But this displays the password in cleartext … A cleaner option is the htpasswd command itself :

htpasswd -B -c htpasswd remi

The -B flag hashes the password using bcrypt (which the registry requires), and -c creates the file.

Now that we have an htpasswd file, we can create a Kubernetes secret with it as input :

kubectl create secret generic registry-auth-secret --from-file=htpasswd=htpasswd
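
You can verify what ended up in the secret (the bcrypt hash, not the cleartext password) :

kubectl get secret registry-auth-secret -o jsonpath='{.data.htpasswd}' | base64 -d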

Finally, we can edit the registry deployment file to add the configuration for the auth parameters :

apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker-registry
  namespace: registry
spec:
  replicas: 2
  selector:
    matchLabels:
      app: docker-registry
  template:
    metadata:
      labels:
        app: docker-registry
    spec:
      containers:
        - name: docker-registry
          image: registry:2.6.2
          env:
            - name: REGISTRY_HTTP_SECRET
              value: azerty
            - name: REGISTRY_HTTP_ADDR
              value: ":5000"
            - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
              value: "/var/lib/registry"
            - name: REGISTRY_AUTH_HTPASSWD_REALM
              value: basic_realm
            - name: REGISTRY_AUTH_HTPASSWD_PATH
              value: /auth/htpasswd
          ports:
          - name: http
            containerPort: 5000
          volumeMounts:
          - name: image-store
            mountPath: "/var/lib/registry"
          - name: auth-dir
            mountPath: /auth
      volumes:
      - name: image-store
        persistentVolumeClaim:
          claimName: docker-registry-pvc
      - name: auth-dir
        secret:
          secretName: registry-auth-secret

And apply it …

kubectl apply -f registry-deployment-auth.yaml

Your docker-registry pods should be restarted with the new configuration. So let’s try it by pushing a new image to our registry :

docker pull alpine:3.6
docker tag alpine:3.6 registry.demos.ovh/alpine:3.6
docker push registry.demos.ovh/alpine:3.6

The last command throws an error: no basic auth credentials. So now we need to log in before pushing, using the login / password we specified when creating the htpasswd file :

docker login registry.demos.ovh

Now if we try to push again :

docker push registry.demos.ovh/alpine:3.6

It works !
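
As a final check, anonymous API requests should now be rejected, while authenticated ones succeed :

# without credentials: 401 Unauthorized
curl -i https://registry.demos.ovh/v2/_catalog

# with the htpasswd credentials
curl -u remi https://registry.demos.ovh/v2/_catalog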

Feel free to comment !

Thanks

Big thanks to Christian Alonso Chavez Ley for his help !