
Kubernetes Guide: DigitalOcean

Ruben Werdmüller
Development Team Lead

Infrastructure as Code is rapidly gaining popularity. The principle is that you control the infrastructure of your service through files that live in, or are directly linked to, the service's codebase. The advantages are that the configuration becomes transparent, it can be adjusted easily, and the entire infrastructure can be migrated to another cloud provider with relatively little effort.

At Humanoids we like to work according to this principle, and we prefer to do so with Kubernetes. In this article we show how you can set up your own Kubernetes infrastructure on DigitalOcean.

Docker, Kubernetes and DigitalOcean

We use the following technologies to build the cloud infrastructure. For the sake of completeness, we briefly summarize each of them here.

Docker

With Docker we can create an 'image'. An image is built from a Dockerfile: a set of instructions that covers, step by step, everything needed to run the service; from the (small) operating system used as a base, to installing all packages and both building and starting the service.

The image therefore contains all the building blocks and instructions needed to create the service; you can think of it as a template for the service. A service built from that template lives in a Docker container, from which it can be started.
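
As a minimal sketch, assuming a Dockerfile in the current directory whose service listens on port 80, building the image and starting a container could look like this (the image name hello-humanoids is just an example):

docker build -t hello-humanoids .
docker run --rm -p 8080:80 hello-humanoids

The first command turns the Dockerfile into an image; the second starts a container from that image and makes the service reachable on http://localhost:8080.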

Kubernetes

Kubernetes allows us to 'orchestrate containers': it is a system for deploying, automatically scaling, and managing containerized apps.

When using Kubernetes, three components are usually defined: a 'deployment', a 'service' and an 'ingress'. These configuration files ensure that 'pods', the units in which the containers run, are scheduled on a 'node': a server. And last but not least there is the cluster: a collection of nodes, and also the place where we set up all of the components mentioned above.

DigitalOcean

DigitalOcean is a cloud provider, similar to Amazon Web Services or Microsoft Azure. DigitalOcean is very popular because it offers a lot of value for money and its services are straightforward to use. For example, you can configure a Kubernetes cluster in just a few steps.

Prerequisites

To get started we need the following:

  • a DigitalOcean account;
  • Homebrew for macOS users or Chocolatey for Windows users, for installing the various CLIs (a sketch of the install commands follows this list);
  • the doctl CLI for managing the cluster;
  • the kubectl CLI for controlling Kubernetes.
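
As a sketch, the CLIs can typically be installed like this (the exact package names depend on your package manager and may change over time):

brew install doctl kubectl
choco install doctl kubernetes-cli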

Set up a Kubernetes cluster on DigitalOcean

First we set up a Kubernetes cluster, which we do from the terminal. For this we need an access token from DigitalOcean, which you can generate in the API section of the DigitalOcean dashboard. With that token we log in:

doctl auth init --access-token=<ACCESS_TOKEN>
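
To check that the authentication succeeded, you can for example request your account details:

doctl account get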

To create our first cluster we use the following in the terminal:

doctl kubernetes cluster create humanoids-k8s --region ams3
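
If you also want to choose the number and size of the worker nodes yourself, the create command accepts a node pool specification. A sketch, in which the pool name, droplet size and count are example values:

doctl kubernetes cluster create humanoids-k8s --region ams3 --node-pool "name=worker-pool;size=s-2vcpu-4gb;count=2"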

This creates a cluster named humanoids-k8s in the ams3 (Amsterdam) region. To save the cluster configuration locally and switch kubectl to the corresponding context, we run:

doctl kubernetes cluster kubeconfig save humanoids-k8s

To see the existing nodes in the cluster we use:

kubectl get nodes

We now have something to deploy to: the cluster on DigitalOcean. Next, we create the files needed to organize everything within the cluster.

Namespace

We define a namespace so that we can easily work with different environments. This file is nice and short:

# /kubernetes/namespace.prod.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production

Then apply the Namespace configuration by typing the following in the terminal:

kubectl apply -f ./kubernetes/namespace.prod.yaml
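
To verify that the namespace exists, you can list all namespaces:

kubectl get namespaces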

Deployment file

A 'deployment' is an instruction with which Kubernetes will set up our service. We tell it where to find our image so Kubernetes can create containers from it. For the scope of this article, we are going to retrieve the open source Nginx web server image from the Docker registry. This way we can immediately test whether all our Kubernetes settings are correct.

# /kubernetes/deployment.prod.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-humanoids
  namespace: production
  labels:
    app: hello-humanoids
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-humanoids
  template:
    metadata:
      labels:
        app: hello-humanoids
    spec:
      containers:
        - image: nginx
          name: hello-humanoids
          ports:
            - containerPort: 80
          imagePullPolicy: Always

Not much exciting is happening here. The imagePullPolicy is set to Always to ensure that the containers running in production are updated whenever the image receives a new update. Furthermore, each replica results in its own pod, which is where the Docker container runs.

NB: the containerPort must always match the port your Docker image exposes.

Apply the deployment configuration:

kubectl apply -f ./kubernetes/deployment.prod.yaml -n production
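
To check whether the rollout succeeded and the pod is running, you can for example use:

kubectl -n production rollout status deployment/hello-humanoids
kubectl -n production get pods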

Service

A 'service' balances the load across pods: it directs users to a specific pod. So if one pod is busy and another pod is available, the service will direct traffic to the available pod. A service can serve multiple pods, or just one.

# /kubernetes/service.prod.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-humanoids
  name: hello-humanoids
  namespace: production
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: hello-humanoids

This makes pods accessible to the ingress, after we apply it to our cluster:

kubectl apply -f ./kubernetes/service.prod.yaml -n production
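
Even without an ingress you can already test the service via a port-forward; local port 8080 is an arbitrary choice here:

kubectl -n production get service hello-humanoids
kubectl -n production port-forward service/hello-humanoids 8080:80

While the port-forward is running, the Nginx welcome page should be reachable at http://localhost:8080.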

Ingress

An 'ingress' determines how traffic should flow from the outside world to the services. For example, you can set up a front-end and a back-end service, and the ingress will redirect the incoming traffic to the appropriate service.

In order for the Ingress to function, an Nginx Ingress Controller must be installed; that is the part that takes care of this routing behind the scenes. We install the Nginx Ingress Controller in our cluster using Helm, a package manager for Kubernetes. macOS users install Helm via the Homebrew command brew install helm; on Windows this can be done via the Chocolatey command choco install kubernetes-helm.

Then we add the chart repository and install the controller:

helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm install nginx-ingress stable/nginx-ingress --set controller.publishService.enabled=true
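
To check that the controller is running and that DigitalOcean is provisioning a load balancer for it, you can list the pods and services in the default namespace (the exact resource names depend on the chart version):

kubectl get pods
kubectl get services

The controller's service of type LoadBalancer will show an external IP once the load balancer is ready.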


Now we can apply our Ingress:

# /kubernetes/ingress.prod.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: hello-humanoids-guide
  namespace: production
spec:
  rules:
    - host: voorbeeld.nl # Enter your own domain name here
      http:
        paths:
          - backend:
              serviceName: hello-humanoids
              servicePort: 80 # Port exposed by the service
            path: /()(.*)

And then apply this configuration again via:

kubectl apply -f ./kubernetes/ingress.prod.yaml -n production
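
If you want to inspect the result, or debug why traffic is not arriving, describing the ingress shows the configured rules and the events from the controller:

kubectl -n production describe ingress hello-humanoids-guide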

If this goes well, you will see on your DigitalOcean dashboard that a new load balancer has been created. By selecting the load balancer in the dashboard and clicking on the title, you can give it a recognizable name.

If you now request a list of the active clusters via doctl, you will see that the cluster we just created is among them!

doctl kubernetes cluster list

Put your service online

As a final step, a DNS record must be created so that we can link a (sub)domain to the Kubernetes cluster. For this we need to know the IP address of the load balancer. You can find this out by running the following command:

kubectl -n production get ingress

The IP address can be found in the ADDRESS column.

Now that you know the IP address, the DNS record can be added. Go to the management environment of your domain and add an A record to the DNS settings: enter the (sub)domain from your ingress configuration and point it to the IP address of the load balancer. After saving, it may take a while before the change propagates; if necessary, flush your local DNS cache. That's all!
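
If you want to verify the result, you could for example check whether the DNS record resolves and request the site directly; voorbeeld.nl is the placeholder domain from the ingress configuration and <LOAD_BALANCER_IP> is the IP address you looked up above:

dig +short voorbeeld.nl
curl -I http://voorbeeld.nl
curl -I -H "Host: voorbeeld.nl" http://<LOAD_BALANCER_IP>

The last command talks to the load balancer directly with the correct Host header, which is useful while DNS is still propagating.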
