Basic HTTPS on Kubernetes with Traefik

Back in February of 2018 Google’s Security blog announced that Chrome would start displaying “not secure” for websites served over plain HTTP starting in July. In doing so they cemented HTTPS as part of the constantly-rising baseline expectations for modern web developers.

These constantly-rising baseline expectations are written into a new generation of tools like Traefik and Caddy. Both are written in Go, and both leverage Let’s Encrypt to automate away the requesting and renewal of TLS certificates, and with that the unexpected expirations. Kubernetes is another tool in that generation, aimed at meeting some of the other baseline expectations around monitoring, scaling and uptime.

Using Traefik and Kubernetes together is a little fiddly, and getting a working deployment on a cloud provider even more so. The aim here is to show how to use Traefik to get Let’s Encrypt based HTTPS working on the Google Kubernetes Engine.
An obvious prerequisite is to have a domain name, and to point it at the static IP we are about to create.

Let’s start with creating our project:

mike@sleepycat:~$ gcloud projects create --name k8s-https
No project id provided.

Use [k8s-https-212614] as project id (Y/n)?  y

Create in progress for [https://cloudresourcemanager.googleapis.com/v1/projects/k8s-https-212614].
Waiting for [operations/cp.6673958274622567208] to finish...done.

Next let’s create a static IP.

mike@sleepycat:~$ gcloud beta compute --project=k8s-https-212614 addresses create k8s-https --region=northamerica-northeast1 --network-tier=PREMIUM
Created [https://www.googleapis.com/compute/beta/projects/k8s-https-212614/regions/northamerica-northeast1/addresses/k8s-https].
mike@sleepycat:~$ gcloud beta compute --project=k8s-https-212614 addresses list
NAME       REGION                   ADDRESS        STATUS
k8s-https  northamerica-northeast1  35.203.65.136  RESERVED

Because I am easily amused, I own the domain actually.works. In my settings for that domain I created an A record pointing at that IP address. When you have things set up correctly, you can verify the DNS part is working with dig:

mike@sleepycat:~$ dig it.actually.works

; <<>> DiG 9.13.0 <<>> it.actually.works
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62565
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;it.actually.works.		IN	A

;; ANSWER SECTION:
it.actually.works.	3600	IN	A	35.203.65.136

;; Query time: 188 msec
;; SERVER: 192.168.0.1#53(192.168.0.1)
;; WHEN: Tue Aug 07 23:02:11 EDT 2018
;; MSG SIZE  rcvd: 62

With that squared away, we need to create our Kubernetes cluster. Before we can do that we need to get a little administrative stuff out of the way. First we need to get our billing details and link them to our project.

mike@sleepycat:~$ gcloud beta billing accounts list
ACCOUNT_ID            NAME                OPEN  MASTER_ACCOUNT_ID
0X0X0X-0X0X0X-0X0X0X  My Billing Account  True
mike@sleepycat:~$ gcloud beta billing projects link k8s-https-212614 --billing-account 0X0X0X-0X0X0X-0X0X0X
billingAccountName: billingAccounts/0X0X0X-0X0X0X-0X0X0X
billingEnabled: true
name: projects/k8s-https-212614/billingInfo
projectId: k8s-https-212614

Then we’ll need to enable the Kubernetes Engine API for this project.

mike@sleepycat:~$ gcloud services enable container.googleapis.com --project k8s-https-212614
Waiting for async operation operations/tmo-acf.74966272-39c8-4b7b-b973-8f7fa4dac4fd to complete...
Operation finished successfully. The following command can describe the Operation details:
 gcloud services operations describe operations/tmo-acf.74966272-39c8-4b7b-b973-8f7fa4dac4fd

Let’s create our cluster. Because both Kubernetes and Google move pretty quickly, it’s good to check the current Kubernetes version for your region with something like gcloud container get-server-config --region "northamerica-northeast1". In my case that shows “1.10.5-gke.3” as the newest, so I’ll use that for my cluster. If you are interested in beefier machines, explore your options with gcloud compute machine-types list --filter="northamerica-northeast1", but for this I’ll slum it with an f1-micro.

mike@sleepycat:~$ gcloud container --project=k8s-https-212614 clusters create "k8s-https" --zone "northamerica-northeast1-a" --username "admin" --cluster-version "1.10.5-gke.3" --machine-type "f1-micro" --image-type "COS" --disk-type "pd-standard" --disk-size "100" --scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" --num-nodes "3" --enable-cloud-logging --enable-cloud-monitoring --addons HorizontalPodAutoscaling,HttpLoadBalancing,KubernetesDashboard --enable-autoupgrade --enable-autorepair

Creating cluster k8s-https...done.
Created [https://container.googleapis.com/v1/projects/k8s-https-212614/zones/northamerica-northeast1-a/clusters/k8s-https].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/northamerica-northeast1-a/k8s-https?project=k8s-https-212614
kubeconfig entry generated for k8s-https.
NAME       LOCATION                   MASTER_VERSION  MASTER_IP    MACHINE_TYPE  NODE_VERSION  NUM_NODES  STATUS
k8s-https  northamerica-northeast1-a  1.10.5-gke.3    35.203.64.6  f1-micro      1.10.5-gke.3  3          RUNNING

You will notice that kubectl (which you obviously have installed already) is now configured to access this cluster.
As part of the Traefik setup we are about to do, we will need to change some RBAC rules. To do that we will create a binding to the cluster-admin role and load it into our cluster.
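
Before applying anything, it’s worth a quick sanity check that kubectl really is pointed at the new cluster. These are just standard kubectl commands, nothing specific to this setup:

kubectl config current-context
kubectl get nodes

The context name should include k8s-https, and you should see the three f1-micro nodes. With that confirmed, here is the binding: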

mike@sleepycat:~$ cat cluster-admin-rolebinding.yaml 
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: owner-cluster-admin-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: <your_username@your_email_you_use_with_google_cloud.whatever>
---

mike@sleepycat:~$ kubectl apply -f cluster-admin-rolebinding.yaml

With that done we can apply the rest of the config I’ve posted in a snippet here with kubectl apply -f https.yaml.
It’s a fair bit of yaml, but a few things are worth pointing out.

First, we are running a single pod with my helloworld image. It’s just the output of create-react-app that I use for testing stuff; a minimal sketch of that part of the snippet appears after the Service definition below.
If you look at the traefik-ingress-service, you will notice we use loadBalancerIP to tell Google we want the service mapped to the static IP we created earlier.

---
apiVersion: v1
kind: Service
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  loadBalancerIP: 35.203.65.136
  ports:
  - name: http
    port: 80
    protocol: TCP
  - name: https
    port: 443
    protocol: TCP
  - name: admin
    port: 8080
    protocol: TCP
  selector:
    k8s-app: traefik-ingress-lb
  type: LoadBalancer
---
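
The helloworld Deployment, its Service, and the Ingress rule that tells Traefik which host to route (and, with acme.onhostrule set, which host to request a certificate for) are also in the snippet. I won’t reproduce them exactly, but a minimal sketch looks roughly like this — the image and port come from my helloworld container, the names are just illustrative:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: mikewilliamson/helloworld
        ports:
        # create-react-app test image listens on 3000
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  ports:
  - port: 80
    targetPort: 3000
  selector:
    app: helloworld
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: helloworld
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  # Traefik watches Ingress rules; this host is what the cert gets requested for
  - host: it.actually.works
    http:
      paths:
      - backend:
          serviceName: helloworld
          servicePort: 80
---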

When looking at the traefik-ingress-controller itself, it’s worth noting the choice of kind: Deployment instead of kind: DaemonSet. This choice was made for simplicity’s sake (only a single pod will read/write to my certs-claim volume so no Multi-Attach errors), and means that I will have a single pod acting as my ingress controller. Read more about the tradeoffs here.
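
The certs-claim volume that Deployment mounts is a PersistentVolumeClaim where Traefik keeps its acme.json. That claim is part of the snippet as well; a minimal version would look something like this (the 1Gi is just a token size, and ReadWriteOnce is fine precisely because only the one pod mounts it):

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: certs-claim
  namespace: kube-system
spec:
  # only the single Traefik pod attaches this volume
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---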

Here is the traefik-ingress-controller in its entirety. It’s a long chunk of code, but I find this helps see everything in context.

A special note about the args being passed to the container: make sure they are strings. You can end up with some pretty baffling errors if you don’t. Other than that, it’s the full set of options to get you TLS certs and automatic redirects to HTTPS.

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    k8s-app: traefik-ingress-lb
  name: traefik-ingress-controller
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
    spec:
      containers:
      - args:
        - "--api"
        - "--kubernetes"
        - "--logLevel=DEBUG"
        - "--debug"
        - "--defaultentrypoints=http,https"
        - "--entrypoints=Name:http Address::80 Redirect.EntryPoint:https"
        - "--entrypoints=Name:https Address::443 TLS"
        - "--acme"
        - "--acme.onhostrule"
        - "--acme.entrypoint=https"
        - "--acme.domains=it.actually.works"
        - "--acme.email=mike@korora.ca"
        - "--acme.storage=/certs/acme.json"
        - "--acme.httpchallenge"
        - "--acme.httpchallenge.entrypoint=http"
        image: traefik:1.7
        name: traefik-ingress-lb
        ports:
        - containerPort: 80
          hostPort: 80
          name: http
        - containerPort: 443
          hostPort: 443
          name: https
        - containerPort: 8080
          hostPort: 8080
          name: admin
        securityContext:
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
        volumeMounts:
        - mountPath: /certs
          name: certs-claim
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      volumes:
      - name: certs-claim
        persistentVolumeClaim:
          claimName: certs-claim
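
The serviceAccountName above points at a traefik-ingress-controller ServiceAccount, and the snippet also carries the RBAC objects that let Traefik watch services, endpoints and ingresses. They follow the standard example from the Traefik 1.x Kubernetes guide, roughly:

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
# read access to the objects Traefik routes traffic with
- apiGroups: [""]
  resources: ["services", "endpoints", "secrets"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources: ["ingresses/status"]
  verbs: ["update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---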

The contents of the snippet should be all you need to get up and running. You should be able to visit your domain and see the reassuring green of the TLS lock in the URL bar.
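
You can check from the command line too; a plain curl against the http entrypoint should come back with a redirect to https, and the https URL should serve the page with a valid certificate:

curl -I http://it.actually.works
curl -Iv https://it.actually.works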

A TLS certificate from Let's Encrypt

If things aren’t working you can get a sense of what’s up with the following commands:

kubectl get all --all-namespaces
kubectl logs --namespace=kube-system traefik-ingress-controller-...
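
It can also help to confirm that the LoadBalancer service actually got the static IP, since a wrong loadBalancerIP tends to leave the external IP stuck in pending. Again, just standard kubectl:

kubectl --namespace=kube-system get service traefik-ingress-service
kubectl --namespace=kube-system describe service traefik-ingress-service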

Where to go from here

As you can see, there is a fair bit going on here. We have DNS, Kubernetes, Traefik and the underlying Google Cloud Platform all interacting, and it’s not easy to get a minimal “hello world” style demo going when that is the case. Hopefully this shows enough to give people a jumping-off point so they can start refining this into a more robust configuration. The next steps for me will be exploring DaemonSets and storing the acme.json somewhere multiple copies of Traefik can access it, maybe a key/value store like Consul. We’ll see what the next layer of learning brings.

Minimum viable Kubernetes

I remember sitting in the audience at the first DockerCon in 2014 when Google announced Kubernetes and thinking “what kind of a name is that?”. In the intervening years, Kubernetes, or k8s for short, has battled it out with Cattle and Docker Swarm and emerged as the last orchestrator standing.

I’ve been watching this happen but have been procrastinating on learning it because from a distance it looks hella complicated. Recently I decided to rip off the bandaid and set myself the challenge of getting a single container running in k8s.

While every major cloud provider is offering k8s, so far Google looks to be the easiest to get started with. So what does it take to get a container running on Google Cloud?

First, some assumptions: you’ve installed the gcloud command (I used this) along with the alpha commands, you have a GCP account, and you’ve logged in with gcloud auth login.

If you have that sorted, let’s create a project.

mike@sleepycat:~$ gcloud projects create --name projectfoo
No project id provided.

Use [projectfoo-208401] as project id (Y/n)?  

Create in progress for [https://cloudresourcemanager.googleapis.com/v1/projects/projectfoo-208401].
Waiting for [operations/cp.4790935341316997740] to finish...done.

With a project created we need to enable billing for it, so Google can charge you for the compute resources Kubernetes uses.
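
If you don’t know your billing account ID offhand, you can list your accounts first; the alpha billing commands mirror the beta ones used in the previous section:

gcloud alpha billing accounts list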

mike@sleepycat:~$ gcloud alpha billing projects link projectfoo-208401 --billing-account 0X0X0X-0X0X0X-0X0X0X
billingAccountName: billingAccounts/0X0X0X-0X0X0X-0X0X0X
billingEnabled: true
name: projects/projectfoo-208401/billingInfo
projectId: projectfoo-208401

Next we need to enable the Kubernetes Engine API for our new project.

mike@sleepycat:~$ gcloud services --project=projectfoo-208401 enable container.googleapis.com
Waiting for async operation operations/tmo-acf.445bb50c-cf7a-4477-831c-371fea91ddf0 to complete...
Operation finished successfully. The following command can describe the Operation details:
 gcloud services operations describe operations/tmo-acf.445bb50c-cf7a-4477-831c-371fea91ddf0

With that done, we are free to fire up a Kubernetes cluster. There is a lot going on here, more than you need, but it’s good to be able to see some of the options available. Probably the only ones to care about initially are the zone and the machine-type.

mike@sleepycat:~$ gcloud beta container --project=projectfoo-208401 clusters create "projectfoo" --zone "northamerica-northeast1-a" --username "admin" --cluster-version "1.8.10-gke.0" --machine-type "f1-micro" --image-type "COS" --disk-type "pd-standard" --disk-size "100" --scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" --num-nodes "3" --enable-cloud-logging --enable-cloud-monitoring --addons HorizontalPodAutoscaling,HttpLoadBalancing,KubernetesDashboard --enable-autoupgrade --enable-autorepair
This will enable the autorepair feature for nodes. Please see
https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more
information on node autorepairs.

This will enable the autoupgrade feature for nodes. Please see
https://cloud.google.com/kubernetes-engine/docs/node-management for more
information on node autoupgrades.

Creating cluster projectfoo...done.                                                                                                                                                                         
Created [https://container.googleapis.com/v1beta1/projects/projectfoo-208401/zones/northamerica-northeast1-a/clusters/projectfoo].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/northamerica-northeast1-a/projectfoo?project=projectfoo-208401
kubeconfig entry generated for projectfoo.
NAME        LOCATION                   MASTER_VERSION  MASTER_IP     MACHINE_TYPE  NODE_VERSION  NUM_NODES  STATUS
projectfoo  northamerica-northeast1-a  1.8.10-gke.0    35.203.8.163  f1-micro      1.8.10-gke.0  3          RUNNING

With that done we can take a quick peek at what that last command created: a Kubernetes cluster on three f1-micro VMs.

mike@sleepycat:~$ gcloud compute instances --project=projectfoo-208401 list
NAME                                       ZONE                       MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
gke-projectfoo-default-pool-190d2ac3-59hg  northamerica-northeast1-a  f1-micro                   10.162.0.4   35.203.87.122  RUNNING
gke-projectfoo-default-pool-190d2ac3-lbnk  northamerica-northeast1-a  f1-micro                   10.162.0.2   35.203.78.141  RUNNING
gke-projectfoo-default-pool-190d2ac3-pmsw  northamerica-northeast1-a  f1-micro                   10.162.0.3   35.203.91.206  RUNNING

Let’s put those f1-micros to work. We are going to use the kubectl run command to run a simple helloworld container that just has the basic output of create-react-app in it.

mike@sleepycat:~$ kubectl run projectfoo --image mikewilliamson/helloworld --port 3000
deployment "projectfoo" created

The result of that is the helloworld container, running inside a pod, inside a replica set inside a deployment, which of course is running inside a VM on Google Cloud. All that’s needed now is to map the port the container is listening on (3000) to port 80 so we can talk to it from the outside world.
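
Before doing that, it’s worth seeing those layers for yourself with one kubectl call (plain kubectl, nothing GKE-specific):

kubectl get deployments,replicasets,pods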

mike@sleepycat:~$ kubectl expose deployment projectfoo --type LoadBalancer --port 80 --target-port 3000
service "projectfoo" exposed

This creates a LoadBalancer service, and eventually we get allocated our own external IP.

mike@sleepycat:~$ kubectl get services
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.59.240.1    <none>        443/TCP        3m
projectfoo   LoadBalancer   10.59.245.55   <pending>     80:32184/TCP   34s
mike@sleepycat:~$ kubectl get services
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
kubernetes   ClusterIP      10.59.240.1    <none>           443/TCP        4m
projectfoo   LoadBalancer   10.59.245.55   35.203.123.204   80:32184/TCP   1m

Then we can use our newly allocated IP and talk to our container. The moment of truth!

mike@sleepycat:~$ curl 35.203.123.204
<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"><meta name="viewport" content="width=device-width,initial-scale=1,shrink-to-fit=no"><meta name="theme-color" content="#000000"><link rel="manifest" href="/manifest.json"><link rel="shortcut icon" href="/favicon.ico"><title>React App</title><link href="/static/css/main.c17080f1.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div><script type="text/javascript" src="/static/js/main.61911c33.js"></script></body></html>

After you’ve taken a moment to marvel at the layers of abstraction involved here, it’s worth remembering that you probably don’t want this stuff hanging around if you aren’t really using it; otherwise you’re going to regret connecting your billing information.

mike@sleepycat:~$ gcloud container --project projectfoo-208401 clusters delete projectfoo
The following clusters will be deleted.
 - [projectfoo] in [northamerica-northeast1-a]

Do you want to continue (Y/n)?  y

Deleting cluster projectfoo...done.                                                                                                                                                                         
Deleted [https://container.googleapis.com/v1/projects/projectfoo-208401/zones/northamerica-northeast1-a/clusters/projectfoo].
mike@sleepycat:~$ gcloud projects delete projectfoo-208401
Your project will be deleted.

Do you want to continue (Y/n)?  y

Deleted [https://cloudresourcemanager.googleapis.com/v1/projects/projectfoo-208401].

You can undo this operation for a limited period by running:
  $ gcloud projects undelete projectfoo-208401

There is a lot going on here, and since this is new territory, much of it doesn’t mean a lot to me yet. What’s exciting to me is finally being able to get a toe-hold on an otherwise pretty intimidating subject.

Having finally started working with it, I have to say both the kubectl and gcloud CLI tools are thoughtfully designed and pretty intuitive, and Google’s done a nice job making a lot of stuff happen in just a few approachable commands. I’m excited to dig in further.