In parts 1 and 2 of this series, we coded up a simple .NET Core 2 web application containing some services, dockerised it, and stored the image in our local Docker image library. Cool.
Now, what happens if we want to share that unit of execution with other members of a team, so they can test changes? We do that using a Docker registry.
I wanted to simulate a more “secure” method of storing images to be shared within a team, so I created my own registry; I didn’t want to use the public Docker Hub. I also didn’t want to use a registry hosted in the cloud. That’s easy, but costs some money. I wanted to do it on the cheap.
I had a spare Raspberry Pi hanging around, so let’s use that as a starter for ten!
Step 1. Install Raspbian. It’s easy enough to find tutorials for that on the ‘net.
Step 2. Install docker. Here I used: https://forum.hilscher.com/Thread-Setup-trusted-Docker-registry-on-a-Raspberry-Pi-to-host-netPI-containers.
I’ll go through each of the commands one by one….
2a. Install and setup the docker binaries to run the docker provided registry image:
curl -fsSL get.docker.com -o get-docker.sh && sh get-docker.sh
2b. Let the “normal” pi user run docker:
sudo usermod -aG docker pi
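One gotcha here: group membership is only re-evaluated at login, so the pi user won’t pick up the docker group until you log out and back in (or start a new shell with the group applied). A quick sanity check that the install and group change worked:

```shell
# Apply the new group membership in the current session
# (or simply log out and back in).
newgrp docker

# Should run without sudo and without a permission error on the socket.
docker run --rm hello-world
```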
2c. Obtain a certificate that can protect the Docker registry. Now this bit isn’t strictly required, but if we want to integrate with Azure DevOps down the line, we’re going to need to make sure that we’ve got a “secure” registry. I got my certificate from https://sslforfree.com; it’ll expire in 90 days, but for the purpose of this tutorial it’ll do nicely. Make sure the “common name” matches the hostname you’re going to call your registry.
2d. Once you’ve completed sslforfree’s application process, they provide the certificates in a zip file. Copy the zip file to your Raspberry Pi and expand the files into /var/lib/docker/certs
2e. Now the magic happens :). Sudo into the root user and run the following:
docker run -d --restart=always --name my-registry -v /var/lib/docker/certs:/certs -e REGISTRY_HTTP_ADDR=0.0.0.0:443 -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/certificate.crt -e REGISTRY_HTTP_TLS_KEY=/certs/private.key -p 50000:443 registry:latest
Let’s break this command down into its component parts:
docker run -d
…runs the container in detached mode (i.e. in the background, like a service)
--restart=always
…restarts the container automatically whenever it stops, including after a reboot.
--name my-registry
…calls the container my-registry
-v /var/lib/docker/certs:/certs
…mounts the folder on the Docker host where we’ve stored the certificates into the container at /certs.
-e REGISTRY_HTTP_ADDR=0.0.0.0:443
…sets an environment variable the registry image understands, telling it which address and port to bind its HTTP daemon to.
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/certificate.crt
…another environment variable pointing docker at the certificate that will enable ssl
-e REGISTRY_HTTP_TLS_KEY=/certs/private.key
…and the corresponding environment variable pointing docker at the private key of the certificate that will enable ssl
-p 50000:443
…forwards port 50000 on the Docker host to port 443 in the container
registry:latest
…and finally the name of the image we’re going to pull down from Docker Hub that this registry will be based on.
Phew.
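Before hitting any URLs, it’s worth a quick check that the container actually stayed up; if the certificate paths or filenames are wrong, the registry tends to exit straight away and the errors show up in its logs:

```shell
# The container should show as "Up" in the status column.
docker ps --filter name=my-registry

# Any TLS/certificate problems will appear here.
docker logs my-registry
```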
*If* everything is successful you should now be able to hit the URL for the registry and get some information back. The URL for my registry is https://myregistry:50000/v2/_catalog
Response from the site looked something like this:
{"repositories":[]}
…this little bit of JSON shows that the registry has no images in it yet.
Once we’ve uploaded a few bits and bobs it will look something more like this:
{"repositories":["monolithsvc","multiservice_addsvc","multiservice_frontend","multiservice_minussvc","multiservice_multiplysvc"]}
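If you’d rather see that as a readable list than raw JSON, a one-liner does the trick. The payload below is just the example response from above; against a live registry you’d pipe in `curl -s https://myregistry:50000/v2/_catalog` instead:

```shell
# Example catalog response (as returned by the /v2/_catalog endpoint above).
response='{"repositories":["monolithsvc","multiservice_addsvc","multiservice_frontend","multiservice_minussvc","multiservice_multiplysvc"]}'

# Print one repository name per line using python3 (avoids needing jq).
echo "$response" | python3 -c 'import json,sys; print("\n".join(json.load(sys.stdin)["repositories"]))'
```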
So we now have a working registry; what do we need to do on our Docker client to be able to push to it?
Step 3. Make Docker Client aware of our Docker Repo.
I use a Mac, but the settings can be found in the same place in Docker CE for Windows. Find the preferences and go to the Daemon section. Here we can add our custom registry.
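For reference, the setting lives under Preferences → Daemon → Insecure registries, which maps to the `insecure-registries` key in the daemon.json file. Strictly speaking, a registry with a certificate from a CA the client already trusts shouldn’t need this at all; it’s only required when the client can’t verify the cert. The equivalent daemon.json fragment, assuming the `myregistry` hostname used throughout this post, looks like this:

```json
{
  "insecure-registries": ["myregistry:50000"]
}
```

Remember to restart the Docker daemon after changing this setting.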
Step 4. Tag the local image we built from our Dockerfile with the registry hostname and image name.
So if you remember in part 2, we ran the following command:
docker build -t monolithsvc .
…that builds an image called “monolithsvc” and stores it locally. If we want to push this to a remote registry at host “myregistry”, for example, we can add an additional tag after the image has been built:
docker image tag monolithsvc:latest myregistry:50000/monolithsvc:latest
Step 5. If that’s all worked then we do a docker push….
docker push myregistry:50000/monolithsvc:latest
If all is successful we should see all the layers uploading to our registry!
…and now, in theory, any of my imaginary team should be able to pull from that registry (if we set it in their preferences too) by doing
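One way to double-check the push landed is to ask the registry itself; alongside the catalog endpoint we used earlier, the Registry HTTP API V2 exposes a tags list per repository:

```shell
# The catalog should now include "monolithsvc".
curl https://myregistry:50000/v2/_catalog

# And the repository should list the "latest" tag we pushed.
curl https://myregistry:50000/v2/monolithsvc/tags/list
```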
docker pull myregistry:50000/monolithsvc:latest
…and that is all I have to say about that for now. 😀
Next time, we start integrating with Azure DevOps and get the test driven development moving….