The goals

I already presented a way of continuously publishing a Hugo blog through different means: from GitLab Pages, to Netlify deployments, to automatic pushes to your own server, the article walked through several setups of blogging automation.

From that point, consider this post as an addendum, an added step for those who are using (or wish to use) docker containers to publish their blog.

By packaging the complete blog as a docker container, we remove external dependencies like the bind mounts that the old way of pushing content relied on. More importantly, since everything necessary is encapsulated in individual containers, we can simply roll back to older versions.

We can also deploy our blog pretty much anywhere that can run containers, without shipping data back and forth through rsync. And, for those who care about the difference, this workflow makes it much easier to switch between push- and pull-based continuous deployment.

Lastly, this also allows easier replication across many servers, for example through docker swarm or Kubernetes (setting up replication will not be part of this post, however).

Evidently, the process is also a bit more involved than the previous post's efforts; to jump right in, you should ideally already know what containers are, as well as the rough concepts they work with.

Building the container

The container build itself is easy. Thanks to the static nature of site generators like Hugo, all we need is a web server to serve our page's content, and the content itself.

Both can be solved by using a pre-made server container (I will use nginx in the following, since I like it as a static server) and by adding the results of the Hugo build process into the correct directory.

Since the final deployment will sit behind a reverse proxy, as most docker deployments do, I will not change any of the nginx options in the container itself. If you have other specific requirements, feel free to set those as the blog container is being built.

If we make use of the previous post’s Hugo build pipeline, the complete Dockerfile for our container is short:

FROM nginx:alpine
LABEL maintainer="My Name <and@em.ail>"

COPY public /usr/share/nginx/html

This should be available somewhere in our repository as a Dockerfile. For my purposes it's easiest to put it in the root directory. Careful, though: if you put the Dockerfile somewhere else, you will have to adjust the working directories or commands in some of the following steps. At its most basic, we then just need to add the following to the build script we already have:

  script:
    - hugo -d public -b "${BLOG_URL}"
    - docker build -t <my-image-tag> .

You can replace <my-image-tag> with the name you want to give your image or, as I did, have it generated automatically by GitLab's pipeline. At the top of the pipeline, in the variables section, we can add IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME as our image tag. This creates a tag name from the (repository) name of the registry we are building for and the current reference (branch) we are on.1 We then just need to change the build script command accordingly: - docker build -t $IMAGE_TAG .
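Put together, the relevant pieces of the pipeline file might look like this (a sketch; the full job definition follows in the next section):

variables:
  IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME

build:production:
  script:
    - hugo -d public -b "${BLOG_URL}"
    - docker build -t $IMAGE_TAG .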

Publishing the container

We now have a way for our pipeline to build our blog container.2 Next we need to upload it to some kind of container registry, so that we can access it from outside the build server itself.

Different registries work: you can make use of docker hub or quay.io. But since we are already on GitLab, and it supplies its own per-repository registries, I will make use of those.

The GitLab documentation pages describe the process to upload containers to registries in detail. We will use a simple version of it, so that the build and publish step for our container ultimately looks like the following:

build:production:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  before_script: *get_and_verify_hugo
  retry: 2
  script:
    - hugo -d public -b "${BLOG_URL}"
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
  artifacts:
    paths:
      - public
  only:
    - master

This will build everything just as before, but now make use of the docker-in-docker service to upload our container image to our registry after building.

Deploying the container

On our server, we can now use docker to deploy the image we created. To check if the container runs as intended, we just do docker run -ti --rm registry.gitlab.com/username/reponame:refname, replacing username, repository name, and ref name with the correct values.
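Since the image contains nginx listening on its default port 80, you can additionally map it to a local port to actually view the page in a browser (the host port here is just an example):

docker run -ti --rm -p 8080:80 registry.gitlab.com/username/reponame:refname

The blog should then be reachable at http://localhost:8080.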

If your repository is open-source, your container registry should also be open access and it should work as is. If your repository is not open, you will first need to sign into your registry with docker login registry.gitlab.com/username/reponame before you can run the container.

Now, we just want to automate those steps: ideally, we create a separate system user on the server, with permissions restricted to those strictly necessary. You can also set up a deploy token in GitLab with which you can log in to the registry, so your account password does not lie around in plain text.

The actual deployment step does not yet differ much from the one in my previous post:

deploy:production:
  stage: deploy
  dependencies:
    - build:production
  before_script:
    - "which ssh-agent || ( apk update && apk add openssh-client )"
    - eval $(ssh-agent -s)
    - echo "${SSH_PRIVATE_KEY}" | tr -d '\r' | ssh-add - > /dev/null
    - mkdir -p ~/.ssh
    - echo "${SSH_KNOWN_HOSTS}" > ~/.ssh/known_hosts
  script:
    - ssh "${SSH_HOST}" ./deploy.sh
  after_script:
    - echo "Deployed to Production at ${BLOG_URL}"
  only:
    - master

We ssh onto the target server again (remembering to set up the necessary variables in the repository settings) and then execute a deploy.sh script on the machine. Again, as a slight additional security measure, I restricted the ssh identity of the deployment step so it can only execute this script and nothing else (see target server configuration).
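One way to achieve this restriction, assuming OpenSSH on the target server, is a command= entry in the deployment user's authorized_keys file, which forces every login with that key to run the deploy script and nothing else (the path and key below are placeholders):

command="/home/deploy/deploy.sh",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-ed25519 AAAA... gitlab-deploy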

The deploy script itself can now conform to whatever the target system needs. If a simple docker run command is enough, you can use that. You can also add more fanciness, like logging each new version to the system journal, or sending yourself an e-mail or message whenever a deployment is pushed.
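For the simple case, such a script might look like the following sketch (the image and container names are assumptions based on the registry path from above; port publishing or reverse-proxy wiring is omitted):

#!/bin/sh
# Pull the newest image from the registry and replace the running container.
docker pull registry.gitlab.com/username/reponame:master
docker stop blog 2>/dev/null || true
docker rm blog 2>/dev/null || true
docker run -d --name blog registry.gitlab.com/username/reponame:master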

In my case, I use docker swarm to run all the containers on my server. In the deploy file all that needs doing is:

docker -v
docker stack deploy -c docker-stack.yml blog --with-registry-auth

It logs the current docker version and deploys the stack defined in docker-stack.yml as my blog stack. The --with-registry-auth option is necessary if your swarm needs to log in to a private container registry. The stack itself once again refers to the GitLab registry for its container:

version: '3'

services:
  app:
    image: registry.gitlab.com/marty-oehme/blog:master
    volumes:
      - logs:/var/log/nginx

volumes:
  logs:
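To check that the stack actually came up after a deployment, docker's stack commands give a quick overview:

docker stack services blog

This lists the stack's services together with their replica counts and the image they are running.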

Done!

Looking ahead

In the future, I would like to extend the deployment process to use webhooks instead of the current ssh process. With a webhook notification, many services can be informed at once, and the script that runs could be swapped out without changing the notification interface.

We would also arrive at even more of a pull model than we have at the moment, in which the build pipeline still pushes the command (to pull the new container) to the server.

Just like containers for the Hugo build step, it would provide another neat abstraction layer between the notification and its reaction:

├ container build process (gitlab)
├ deployment to container registry (gitlab)
├ update notification (webhook)
└ deployment on server (script)
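As a sketch of what the webhook piece could look like (purely an assumption, not part of the current setup), a small listener like adnanh's webhook tool could translate an incoming request into a call to the existing deploy script with a minimal hooks file:

- id: deploy-blog
  execute-command: /home/deploy/deploy.sh
  command-working-directory: /home/deploy

Running webhook -hooks hooks.yaml would then expose an endpoint (by default at :9000/hooks/deploy-blog) that the pipeline, or any other interested service, could notify with a single HTTP request instead of an ssh login.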

The deployment on the server should ideally be independent of the two-step build process on GitLab itself. Right now, it still knows too much about the implementation (by directly invoking a script on the target machine) instead of relying on just an abstract interface.

That is for a future day though, and for now the deployment to a containerized blog works pretty nicely indeed.


  1. This also makes it quite easy to have different images built for different branches. You can, for example, build separate images for the master branch and the develop branch, or for individual posts. Since the development version of this blog is hosted on GitLab pages itself, I don't make use of this, but it is easy to achieve. ↩︎

  2. In reality, what we have is the image that future containers base their content on — but, semantics. ↩︎