Skinning A Cat: Five Ways To Setup Kubernetes

So by now you’ve probably heard about the new hotness that is microservices, and the platform for delivering them: Kubernetes. Put simply, Kubernetes is a platform for container orchestration (i.e., running containers across multiple compute nodes and ensuring all the resources they require are present).

Historically, Kubernetes has been a bear to install. Back in the olden days you were stuck manually configuring individual daemons before you got to a usable state. Nowadays, though, there are a wide variety of ways to get Kubernetes, and instantiating a cluster is about as easy as standing up a LAMP or MEAN web stack. Meaning there’s still effort, but it’s pretty trivial compared to the effort that goes into the actual non-Kubernetes work.

In this post I’m going to walk you through getting a basic Kubernetes cluster running, up to the point where kubectl get nodes returns a list of nodes, all of which are in a Ready state, and we’re able to create a vanilla nginx deployment and verify that the default landing page shows up in our browsers. Each “method” section is self-sufficient and makes no reference to the others, so please pick the one you’re most interested in rather than reading the whole thing (or do read the whole thing, it’s a free country and I’m not your dad).


Alright, to start off with let’s have a mental model for what Kubernetes actually is. At its core, server-side Kubernetes is composed of three parts:

  • kube-apiserver: A REST API backed by etcd for persistent storage. This is where all the configuration for Kubernetes goes and 90% of what you deal with on Kubernetes is purely configuration.
  • kube-controller-manager: A control loop for taking the current state of the cluster as described by the API server and changing the rest of the configuration in such a way that it will eventually meet that state. When you create a new deployment, the controller manager spawns a process that goes through the work of actually creating all the container Pod objects that need to exist. When a node goes away, the controller is the part that notices this and creates a new pod in the same style as the deployment.
  • kube-scheduler: Actually assigns pods to nodes. Conceptually it’s relatively simple compared to the controllers but makes up for it by being highly responsive to policies (such as quality of service) it finds in the API server.

Client-side Kubernetes consists of two basic components:

  • kubelet: the main Kubernetes client. It listens to the API and interacts with the container runtime to perform, on the worker node, the actions the servers have determined need to happen. This involves actually implementing resource constraints, mounting volumes, setting up pod networking, performing health checks, etc.
  • kube-proxy: the secondary Kubernetes client. It also listens to the API and creates the real-world implementation details for connecting to the exposed resources running on Kubernetes, through such means as spawning workers to listen on ports and creating iptables rules to route traffic to the correct locations.

That’s an incredibly rudimentary overview of Kubernetes but it’s useful information to have so that you have a good sense of what the sections below are actually accomplishing.
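To make the idea of “it’s all configuration” concrete, here’s a sketch of the kind of object everything above revolves around: a minimal Pod manifest. (This is a generic illustration with names of my own choosing, not something we deploy later in this post.)

```yaml
# A minimal Pod object. It is submitted to kube-apiserver (which persists
# it in etcd), assigned to a node by kube-scheduler, and actually started
# on that node by the kubelet via the container runtime.
apiVersion: v1
kind: Pod
metadata:
  name: hello-nginx        # hypothetical name, for illustration only
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80
```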

Method #1: Personal Development With minikube

minikube is the Kubernetes deployment type aimed at individual developers and those looking to learn Kubernetes on their own. For these people, just having a single node they can deploy a service to is enough to get their head around things. Luckily this is pretty easy to do with minikube which will connect to the hypervisor on your desktop system and create a VM called “minikube” on which an entire instance of Kubernetes will run.


I’m going to assume that we’re going to have the minikube VM running via KVM. VirtualBox and HyperV are also possibilities but I’m just going to explain one way of doing it here.

To get this running:

  • Ensure packages required by minikube are installed: apt-get install -y libvirt-clients libvirt-daemon-system qemu-kvm
  • Ensure Docker Machine’s KVM driver (you don’t need Docker Machine itself) is available in $PATH:
[root@workstation ~]# wget -O /usr/local/bin/docker-machine-driver-kvm
Saving to: ‘/usr/local/bin/docker-machine-driver-kvm’

/usr/local/bin/docker-machine-driver-kvm 100%[======================================================================================================>] 11.34M 7.22MB/s in 1.6s

2018-08-09 15:37:02 (7.22 MB/s) - ‘/usr/local/bin/docker-machine-driver-kvm’ saved [11889064/11889064]

[root@workstation ~]# chmod 0700 /usr/local/bin/docker-machine-driver-kvm 
[root@workstation ~]#
  • Once the software prerequisites are out of the way we can install minikube itself which just makes an executable called minikube available in $PATH:
[root@workstation ~]# wget 
Saving to: ‘minikube_0.28-2.deb’ 

minikube_0.28-2.deb 100%[======================================================================================================>] 6.71M 6.31MB/s in 1.1s 

2018-08-09 14:48:53 (6.31 MB/s) - ‘minikube_0.28-2.deb’ saved [7032166/7032166] 

 [root@workstation ~]# dpkg -i minikube_0.28-2.deb 
Selecting previously unselected package minikube. 
(Reading database ... 252915 files and directories currently installed.) 
Preparing to unpack minikube_0.28-2.deb ... 
Unpacking minikube (0.28-2) ... 
Setting up minikube (0.28-2) ...

Please note that by the time you read this the URL for this might have changed, so please get the URL you use to download the .deb file from the current releases page.

  • At this point we can instruct minikube to bootstrap the cluster. Since this is the first time we’re starting minikube we need to tell it what kind of virtualization we’re using with --vm-driver:
[root@workstation ~]# minikube start --vm-driver kvm 
kubectl could not be found on your path. kubectl is a requirement for using minikube 
To install kubectl, please run the following: 

curl -Lo kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/ 

To disable this message, run the following: 

minikube config set WantKubectlDownloadMsg false 
Starting local Kubernetes v1.10.0 cluster... 
Starting VM... 
WARNING: The kvm driver is now deprecated and support for it will be removed in a future release. 
Please consider switching to the kvm2 driver, which is intended to replace the kvm driver. 
See for more information. 
To disable this message, run [minikube config set WantShowDriverDeprecationNotification false] 
Downloading Minikube ISO 
160.27 MB / 160.27 MB [============================================] 100.00% 0s 
Getting VM IP address... 
Moving files into cluster... 
Downloading kubeadm v1.10.0 
Downloading kubelet v1.10.0 
Finished Downloading kubelet v1.10.0 
Finished Downloading kubeadm v1.10.0 
Setting up certs... 
Connecting to cluster... 
Setting up kubeconfig... 
Starting cluster components... 
Kubectl is now configured to use the cluster. 
Loading cached images from config file.

You’ll notice that minikube told us that we didn’t have kubectl installed. Though not strictly required for the cluster to function, new users won’t be able to interact with Kubernetes without it, so you should run the command minikube gives you to install it:

[root@workstation ~]# curl -Lo kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/ 
% Total % Received % Xferd Average Speed Time Time Time Current 
Dload Upload Total Spent Left Speed 
100 51.7M 100 51.7M 0 0 6718k 0 0:00:07 0:00:07 --:--:-- 9068k 
[root@workstation ~]#

At this point, if you have no errors, kubectl get nodes should successfully return a list of running nodes:

[root@workstation ~]# kubectl get nodes 
NAME      STATUS  ROLES   AGE  VERSION
minikube  Ready   master  13m  v1.10.0

If the master node is in a Ready state then that indicates your installation completed successfully.


OK so now that we have both the master and kubelet processes running in that minikube VM, let’s get a basic web server running and accessible:

[root@workstation ~]# kubectl run --image nginx nginx --port=80
deployment.apps "nginx" created
[root@workstation ~]# kubectl expose deployment nginx --type=NodePort --name=nginx
service "nginx" exposed
[root@workstation ~]# kubectl describe service nginx | grep NodePort:
NodePort: <unset> 31498/TCP
[root@workstation ~]# kubectl describe node minikube | grep InternalIP
[root@workstation ~]#

Breaking each command down:

  • The kubectl run command instructs Kubernetes to create a new deployment called nginx using a Docker image, also called nginx, which will be pulled down from Docker Hub if it isn’t already present locally.
  • The kubectl expose instructs Kubernetes to expose the known application port (the --port 80 in the previous command) as a Kubernetes service of type NodePort.
  • The NodePort type of service allocates a random available port on the worker node (in this case the minikube VM) and routes it to the deployment we created. We find out what port it allocated by running kubectl describe service and grep’ing for the NodePort: field.
  • Finally, we need to know what IP address our master node was allocated. To do this we run kubectl describe node on the only node we have at the moment and filter for the InternalIP: field.
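For reference, those two imperative commands can also be written declaratively as a manifest and created with kubectl apply -f. Here’s a sketch of the equivalent objects (the app: nginx label is my own choice; kubectl run picks its own labels):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx          # pulled from Docker Hub if not cached
          ports:
            - containerPort: 80 # the --port 80 from kubectl run
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort              # allocates a random high port on the node
  selector:
    app: nginx
  ports:
    - port: 80
```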

So putting it together: if I visit the node’s InternalIP at the allocated NodePort in my browser, I should get the nginx landing page, indicating that my browser is able to connect to the nginx instance running on minikube. In my case it indeed worked.


In my case, when I was setting up the example to do the above, I was using KVM for virtualization but had VirtualBox binaries still installed. Whenever I tried to start minikube with minikube start --vm-driver kvm it would produce errors such as:

E0809 14:55:56.534263 21364 start.go:174] Error starting host: Error getting state for host: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path. 
E0809 14:55:56.534627 21364 start.go:180] Error starting host: Error getting state for host: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path.

In this case, manually specifying kvm for virtualization wasn’t enough, since minikube could still find the VirtualBox utilities. I had to uninstall all VirtualBox-related binaries and then delete the local minikube config with rm -rf ~/.minikube in order for the commands above to work as expected.

In general, this is a good MO when setting up minikube. A lot of state gets cached locally, and even after you fix the underlying issue you may still see errors. Since you’re just setting things up, delete all local configuration and let it regenerate to clear out any stale cached values.

Method #2: Cluster Instantiation With kubeadm


Of all the options in this guide, this is definitely the longest way to install Kubernetes, but it’s pretty entry-level since it takes advantage of OS-level knowledge you probably already have. For this scenario, let’s assume you have five Ubuntu systems: two soon-to-be-master nodes and three soon-to-be-worker nodes. Each system has 10GB of storage, a single CPU core, and 1GB of memory. Obviously these aren’t production specs, but they suffice for the task of getting Kubernetes running.

Getting the systems ready

The means of preparing the systems for Kubernetes is the same regardless of whether they’re going to be a master or a worker.

First things first: disable any swap you have on each of the nodes with swapoff, then remove it from /etc/fstab as well so it doesn’t come back at boot. The reasoning behind Kubernetes not allowing swap is complex, but it essentially boils down to this: the logic behind its performance guarantees breaks when some pages may be swapped to disk while others sit in physical memory.
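In practice that’s a swapoff plus an fstab edit. Here’s a sketch; to keep it safely runnable it operates on a throwaway copy of a sample fstab rather than the real /etc/fstab (the sed pattern simply comments out any line with a swap filesystem type):

```shell
# On a real node you'd run (as root):
#   swapoff -a
# and then edit /etc/fstab. Demonstrated here on a sample copy:
cat > /tmp/fstab.sample <<'EOF'
UUID=abcd1234 /    ext4 errors=remount-ro 0 1
/swapfile     none swap sw                0 0
EOF

# Comment out every swap entry so swap doesn't come back at boot:
sed -i.bak '/[[:space:]]swap[[:space:]]/s/^/#/' /tmp/fstab.sample
cat /tmp/fstab.sample
```

On a real node you would point the sed at /etc/fstab itself (keeping the .bak backup it writes is a good idea).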

Next we need to get the additional repositories configured. Each system should have https transport enabled for its package sources with apt-get update && apt-get install -y apt-transport-https curl, the key for the Kubernetes repository trusted by running curl -s | apt-key add -, and finally the repo itself configured by adding the following to /etc/apt/sources.list.d/kubernetes.list:

deb kubernetes-xenial main

Download the new repo’s metadata with apt-get update. Once all the nodes have been prepared we can start the actual process of getting Kubernetes running.

Installing kubeadm and bootstrapping the Cluster

Now that each node has the repos configured and the OS made ready, let’s bootstrap the new cluster on kube-master01. To do that, we install the required Kubernetes software from the repository we configured in the previous section by issuing an apt-get install -y kubeadm command.

Once the Kubernetes packages are living on your system, you can bootstrap the cluster with the kubeadm init command, as in the following output:

root@kube-master01:~# kubeadm init
[init] using Kubernetes version: v1.11.2                                                                                                                                                       
[preflight] running pre-flight checks                                                                                                                                                          
I0810 22:05:20.541146    5128 kernel_validator.go:81] Validating kernel version                                                                                                                
I0810 22:05:20.548430    5128 kernel_validator.go:96] Validating kernel config                                                                                                                 
        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03                             
[preflight/images] Pulling images required for setting up a Kubernetes cluster                                                                                                                 
[preflight/images] This might take a minute or two, depending on the speed of your internet connection                                                                                         
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'                                                                                           
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"                                                                                             
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"                                                                                                                 
[preflight] Activating the kubelet service                                                                                                                                                     
[certificates] Generated ca certificate and key.                                                                                                                                               
[certificates] Generated apiserver certificate and key.                                                                                                                                        
[certificates] apiserver serving cert is signed for DNS names [kube-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [ 192.
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kube-master01 localhost] and IPs [ ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kube-master01 localhost] and IPs [ ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 64.006026 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node kube-master01 as master by adding the label "''"
[markmaster] Marking the node kube-master01 as master by adding the taints []
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-master01" as an annotation
[bootstraptoken] using token: wasuzv.ifhzvwmfy4l6k6hk
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token wasuzv.ifhzvwmfy4l6k6hk --discovery-token-ca-cert-hash sha256:40540d7dcbc0c936e8749c119be1a56d145f9c92de71ab3070db7c84c8ab1664

That’s a lot of text output, but I thought it was important to see what a successful bootstrap looks like. As stated in the output above, this process creates a file at /etc/kubernetes/admin.conf with the YAML configuration containing the user certificates for authenticating to the API. To use it, copy it to ~/.kube/config for every user you wish to be able to administer Kubernetes, on every node in the cluster.

For now you can ignore the “pod network” message (we’ll deal with the overlay network in a bit) and skip ahead to registering the nodes with this master. To do that, copy the kubeadm join command it gives you at the end; this is the command that will join the workers to the cluster.

OK enough jibber jabber, let’s register those worker nodes.

Enrolling the Worker Nodes

OK so we actually now technically have a “cluster,” but without any worker nodes to assign work to or an overlay network to communicate over, it’s in a dysfunctional and unusable state. Let’s fix that by installing the Kubernetes client components on each node and joining it to the cluster with the kubeadm join command our bootstrap gave us:

root@kube-worker01:~# apt-get install -y kubeadm
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following additional packages will be installed:
cri-tools kubernetes-cni socat
The following NEW packages will be installed:
cri-tools kubeadm kubectl kubelet kubernetes-cni socat
Created symlink /etc/systemd/system/ → /lib/systemd/system/kubelet.service.
Setting up kubectl (1.11.2-00) ...
Processing triggers for man-db (2.8.3-2) ...
Setting up kubeadm (1.11.2-00) ...
root@kube-worker01:~# kubeadm join --token wasuzv.ifhzvwmfy4l6k6hk --discovery-token-ca-cert-hash sha256:40540d7dcbc0c936e8749c119be1a56d145f9c92de71ab3070db7c84c8ab1664
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

I0811 16:28:53.762525 5585 kernel_validator.go:81] Validating kernel version
I0811 16:28:53.764354 5585 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server ""
[discovery] Created cluster-info discovery client, requesting info from ""
[discovery] Requesting info from "" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server ""
[discovery] Successfully established connection with API Server ""
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-worker01" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
root@kube-worker01:~# mkdir .kube
root@kube-worker01:~# cp ~jad87/admin.conf .kube/config
root@kube-worker01:~# kubectl get nodes
NAME            STATUS    ROLES   AGE  VERSION
kube-master01   NotReady  master  18h  v1.11.2
kube-worker01   NotReady  <none>  37s  v1.11.2

I’ve only shown an abbreviated output for the first node but the other nodes should follow the same workflow:

  • Installing kubelet (the main client, which includes kube-proxy) and then the kubectl and kubeadm commands so we can easily communicate with the master. These last two aren’t functionally required by Kubernetes per se, but they are required for the workflow I’m showing you here.
  • We use kubeadm join to join this node to the cluster
    • The first argument is just the IP address and port the API server is listening on. This communication is all sent over HTTPS for security.
    • We ensure our initial requests include a valid token to prove to the API server that this client server is authorized to join to the cluster (as opposed to just some rando wanting to join to destabilize the cluster).
    • Finally we have the certificate hash, which is used to verify the server certificate. This verifies the master to the client (as opposed to the previous step of authenticating the client to the API server). It isn’t strictly required to join, but it prevents a false “master” from being presented to the worker node, which could capture the token (enabling the previous attack) or send rogue pods to the worker node for things like crypto mining or launching other attacks.
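As an aside, that sha256 hash can be recomputed from the cluster’s CA certificate if you lose it; the openssl pipeline below is the standard recipe (a sha256 digest of the DER-encoded public key). Since this is just a sketch, it generates a throwaway self-signed certificate in /tmp to stand in for the real /etc/kubernetes/pki/ca.crt:

```shell
# Stand-in for the real CA cert at /etc/kubernetes/pki/ca.crt -- generate
# a throwaway self-signed certificate for demonstration purposes:
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key \
    -out /tmp/ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null

# The actual recipe: sha256 of the DER-encoded public key. On a real
# master, point -in at /etc/kubernetes/pki/ca.crt instead of /tmp/ca.crt.
openssl x509 -pubkey -in /tmp/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'
```

The 64-character hex string it prints is what you pass as --discovery-token-ca-cert-hash sha256:<hash>.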

Once all your nodes have been registered, your kubectl get nodes output should look something like this:

root@kube-worker01:~# kubectl get nodes
NAME            STATUS    ROLES   AGE  VERSION
kube-master01   NotReady  master  18h  v1.11.2
kube-worker01   NotReady  <none>  9m   v1.11.2
kube-worker02   NotReady  <none>  8m   v1.11.2
kube-worker03   NotReady  <none>  6m   v1.11.2

At this point, the only thing stopping our cluster from going live is the lack of an overlay network.

Configuring the Overlay Network

Developed originally at Google, the earliest iterations of Kubernetes were written with Google’s networking in mind, which meant it broke for pretty much anybody who isn’t Google. To work around this issue (as well as to solve other problems, like encryption) overlay networks were created to service this need.

There are several types of overlay networks available (such as Calico or Flannel) but I’ll be using Weave here. To do this just log onto the master as whichever user you’ve given that admin.conf administrative config to (in their ~/.kube/config) and create the supporting DaemonSet for Weave:

root@kube-master01:~# kubectl apply -f "$(kubectl version | base64 | tr -d '\n')"
serviceaccount/weave-net created
...
daemonset.extensions/weave-net created

Breaking down the above:

  • We use kubectl apply to create all supporting service accounts and DaemonSets for Weave networking
    • We use kubectl apply instead of kubectl create so that any future versions of the YAML file will only override the listed attributes rather than trying to create new objects with the given attributes.
    • Even though the weave process is in a container, the DaemonSets will have appropriate levels of access for creating a VPN tunnel to all the other nodes in the cluster.
    • We apply directly from a URL, but it’s also possible to download the YAML file and edit it before applying. In fact, this is required when you want to set the WEAVE_PASSWORD environment variable to enable network encryption.

If your installation of weave is successful you should start seeing nodes cutting over to the Ready state indicating that they’re ready to accept pods:

root@kube-master01:~# kubectl get nodes
NAME            STATUS  ROLES   AGE  VERSION
kube-master01   Ready   master  23h  v1.11.2
kube-worker01   Ready   <none>  5h   v1.11.2
kube-worker02   Ready   <none>  5h   v1.11.2
kube-worker03   Ready   <none>  5h   v1.11.2

If you see output similar to the above, congratulations: your Kubernetes cluster is now fully functional and ready to accept work.


OK so now that we’re ready to accept work, let’s do it. These commands should create a default nginx service:

root@kube-master01:~# kubectl run --image nginx nginx --port 80
deployment.apps/nginx created
root@kube-master01:~# kubectl expose deployment nginx --type=NodePort --name=nginx
service/nginx exposed
root@kube-master01:~# kubectl get services nginx
nginx NodePort <none>      80:32188/TCP  29s

At which point, using my web browser to navigate to port 32188 on any node (for instance http://kube-master01:32188 or http://kube-worker02:32188) causes the familiar nginx landing page to pop up, indicating that the browser connected to the nginx instance running in a container.


  • If you need to join a new node after the original bootstrap token has expired (as indicated by kubeadm token list output) then you can generate a new bootstrap token with  the kubeadm command like so: kubeadm token create --description 'creating new token' --ttl 1h --usages authentication,signing --groups system:bootstrappers:kubeadm:default-node-token which will spit out a new token to use (limited to the specified 1 hour).
  • If you need to get a status description of your Weave overlay, just issue a curl http://localhost:6784/status on any node.
  • If you need to institute encryption with Weave, either edit the YAML file before doing the kubectl apply above, or issue a kubectl edit daemonsets --namespace kube-system weave-net command to be presented with a temporary YAML file to edit instead. Add an environment variable called WEAVE_PASSWORD to the weave (not weave-npc) container, set to whatever you want the password to be.
    • In the case of editing the existing DaemonSet, the update will be pushed out in a rolling manner, so expect the nodes to go offline one-by-one as encryption is enabled on each.
    • After successfully deploying the change, all nodes should show Encryption: enabled in their Weave status pages.
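For reference, the edited portion of the DaemonSet would end up looking roughly like this (a fragment only; a plain value is used for brevity, and the password itself is a placeholder — referencing a Secret would be the safer choice):

```yaml
# Fragment of the weave-net DaemonSet (kube-system namespace); only the
# relevant container entry is shown.
containers:
  - name: weave            # the weave container, not weave-npc
    env:
      - name: WEAVE_PASSWORD
        value: "my-weave-password"   # placeholder -- pick your own
```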

Method #3: Cluster Instantiation With kops


kops is a golang tool that is part of the official Kubernetes project; it allows an administrator to quickly and easily spin up new clusters in a “cloud native” fashion.

For the previous two sections, you installed onto already-provisioned and already-configured nodes. With kops, the nodes are provisioned and configured upon invocation, so when the need for a new Kubernetes cluster arises you can quickly spin one up. Though it requires some setup time (mostly getting API and SSH keys in order), you’ll find that going from “nothing running” to “an nginx server running inside of a k8s pod” is dramatically faster, sometimes taking as little as 10-15 minutes.

For this example, I’m going to use AWS since that’s the cloud provider kops has the best support for, but it already has at least alpha-level support for Google Cloud and DigitalOcean. It’s possible that by the time you read this, those will be production quality.

Getting AWS Ready

You need three main things out of AWS:

  1. An S3 bucket for kops to throw its cluster configuration into.
  2. An SSH key present on the local system and added to the AWS web ui.
  3. A user that has access to the aforementioned S3 bucket, associated with known access keys, and that has the following permissions in AWS: AmazonEC2FullAccess, AmazonS3FullAccess, IAMFullAccess, and AmazonVPCFullAccess
    1. Additionally, if it has AmazonRoute53FullAccess you can do non-gossip-based DNS, but the setup presented here is gossip-based, so this permission is optional for following this guide.
    2. You can generate new access keys in the IAM console underneath Users then selecting the target user and underneath the Security Credentials tab there’s an option for generating a new key for this user.

Once you have the above, execute the following to ensure kops has access to the information for connecting to EC2 and S3:

export KOPS_STATE_STORE="s3://rascaldevio-kops.k8s.local"

You can run this in the command line or alternatively put it into your .bash_profile if you plan on working with kops a lot.
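The full set of exports usually looks something like the following .bash_profile fragment. The AWS variable names are the standard AWS SDK ones; the key values shown are the well-known AWS documentation placeholders, so substitute the access key pair you generated in the IAM console:

```shell
# Credentials for the IAM user created earlier (placeholder values):
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

# Where kops stores cluster state -- the S3 bucket created earlier:
export KOPS_STATE_STORE="s3://rascaldevio-kops.k8s.local"
```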

Installing Required Software

So there are two programs you need for managing Kubernetes clusters the kops way: kops and kubectl itself. Since neither has unusual dependencies, both can be installed by simply using wget to download the executables:

# wget -O kops$(curl -s | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
# wget -O kubectl$(curl -s
# chmod +x {kops,kubectl}
# sudo mv {kops,kubectl} /usr/local/bin

Alternatively, you may wish to install from repositories you trust rather than fetching binaries this way; the above is just faster to show.

Once your environment has been setup and the binaries are in place ready to be executed, you can finally use kops.

Bootstrapping the Cluster

The preceding may have seemed like a lot to do, but it’s certainly a lot faster than manually installing Kubernetes using the more traditional approach outlined in the “Method #2” section.

For your day-to-day usage of kops though, the steps are as easy as instructing kops to generate a new cluster configuration and place it in the S3 bucket located at $KOPS_STATE_STORE (configured above):

# kops create cluster \
--networking weave \
--zones us-east-1b \
--name rascaldevio-kops.k8s.local

and then tell kops to make that configuration active on EC2:

# kops update cluster --yes rascaldevio-kops.k8s.local

and then you just wait until the kops update command above completes. After about 5-6 minutes (YMMV though) you’ll have a fully functional Kubernetes cluster.

The second command is fairly obvious so I’ll skip that. Going back to the first command, though, it breaks down like this:

  • We use create cluster to generate a new cluster rather than delete or edit one.
  • We manually set the network overlay to Weave (as in all other methods) rather than the default kubenet overlay. This is optional if you have no preference; I just prefer Weave.
  • We tell kops the specific zone (rather than just the general us-east-1 region) we want to deploy to. In the case of multiple cloud providers, kops may use the structure of the zone name to infer the cloud provider you’re targeting (otherwise you can use --cloud to set it manually).
  • Finally, we give the cluster a globally unique name. As long as the name ends in k8s.local, kops will know not to attempt any sort of DNS management and will instead use gossip-based DNS to allow the nodes to find each other.
  • If we wanted more than the default two worker nodes, we could add --node-count <count> to the command line options.
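Pulling the optional knobs together, a slightly beefier invocation might look like the following sketch. --node-count, --node-size, and --master-size are standard kops flags, though the counts and instance types chosen here are purely illustrative:

```shell
# Illustrative only: 4 workers on t2.medium instances across two zones
kops create cluster \
  --networking weave \
  --zones us-east-1b,us-east-1c \
  --node-count 4 \
  --node-size t2.medium \
  --master-size t2.medium \
  --name rascaldevio-kops.k8s.local
```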

After the kops update command finishes, give the system the aforementioned 5-6 minutes to finish booting the VMs and initializing Kubernetes for the first time, after which kubectl get nodes should start returning the two worker nodes and the one master that kops generated for you:

[me@workstation ~]$ kubectl get nodes
NAME                          STATUS  ROLES   AGE  VERSION
ip-172-20-39-225.ec2.internal Ready   master  3m   v1.9.8
ip-172-20-54-186.ec2.internal Ready   node    2m   v1.9.8
ip-172-20-60-222.ec2.internal Ready   node    2m   v1.9.8

Huzzah! We have a new Kubernetes cluster to play around with.


Verifying the Cluster

OK, so we have a new cluster; let’s verify it works. The last thing our kops update command did was set our current kubectl context to the new cluster, so we can go straight to issuing commands to run stuff on it:

[me@workstation ~]$ kubectl run --image nginx --port 80 nginx
deployment.apps/nginx created
[me@workstation ~]$ kubectl expose deployment --type=LoadBalancer nginx
service/nginx exposed

The above is simply me:

  • Creating a new deployment, also named nginx, from an upstream image called nginx (DockerHub, IIRC) and manually setting the application port to 80
  • Exposing that deployment (i.e. creating a service for it) with a service of type LoadBalancer
    • Services of type LoadBalancer have logic internal to them that enables them to automatically configure load balancer instances on EC2.

After the “Ensured load balancer” message appears in the events section of kubectl describe service nginx, wait a few minutes (to allow DNS caches to expire); then we should be able to open the nginx landing page in our browser using the LoadBalancer Ingress field of describe service:

[me@workstation ~]$ kubectl describe service nginx | grep "LoadBalancer Ingress"
LoadBalancer Ingress:

Which, when I loaded the above, worked. Yay.
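As an aside: if grepping human-oriented describe output feels fragile for scripting, kubectl can emit just that field directly via a JSONPath query. A sketch, assuming the nginx service created above:

```shell
# Prints only the ELB hostname allocated to the nginx service
kubectl get service nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```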


Troubleshooting

  • Given that kops itself is an automated process that provisions from scratch, it’s kind of hard to do something wrong with it. Usually issues with kops on AWS boil down to either using a command line option incorrectly or having access-level issues with the user you’re trying to authenticate as.
  • If you attempt a kubectl get nodes and get only Unable to connect to the server: EOF back, this is usually due to jumping the gun after provisioning the cluster and you should wait five minutes or so before attempting to run it again.

Method #4: Cluster Instantiation With kubespray


kubespray is an interesting tool that leverages an existing skillset (Ansible configuration management) for spinning up Kubernetes clusters. With minimal tweaking you can install a wide variety of cluster configurations. It’s nowhere near as fast as kops, but it has the added benefit of working with any hosting provider: as long as ansible-playbook is able to establish an ssh session and get to root, you’re good.

Getting the Systems Ready

For the target VMs I have three Ubuntu 16.04 LTS systems running with 2GB of RAM and two CPU cores each. One (kube-master01) will house both the persistent etcd database and the Kubernetes server components (which kubespray packages together using hyperkube) while the other two (kube-worker01 and kube-worker02) will be the workload nodes. You will need to ensure passwordless logins from the provisioning system via public keys before beginning. Additionally, each of the three nodes needs a current version of the Python interpreter present.

For the provisioning system, you need the same basic software requirements as for running any sort of ansible-playbook command. If in doubt, there’s a requirements.txt file in the git repository you’ll clone down later, and doing a pip install -r requirements.txt should be enough to get the required Python modules in place. Fully explaining what’s required to get ansible-playbook running is a little out of scope for this guide, though.

Before we begin actually bootstrapping the cluster, we need to download the ansible playbook and related assets via git and check out the v2.6.0 tag (since that’s what I tested whilst writing this):

[me@workstation ~]$ git clone https://github.com/kubernetes-sigs/kubespray.git
Cloning into 'kubespray'...
remote: Counting objects: 25302, done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 25302 (delta 0), reused 0 (delta 0), pack-reused 25300
Receiving objects: 100% (25302/25302), 7.85 MiB | 6.27 MiB/s, done.
Resolving deltas: 100% (13841/13841), done.
[me@workstation ~]$ cd kubespray/
[me@workstation kubespray]$ git checkout tags/v2.6.0

HEAD is now at 8b3ce6e4 bump upgrade tests to v2.5.0 commit (#3087)

After kubespray is available at the tested version, we can move onto actually defining the kind of cluster we want and bootstrapping it on the target nodes.

Bootstrapping the Cluster

Since this is just our first introduction, our needs won’t be too complex, so I’ll just modify the sample inventory. In a real-world scenario you would probably want to copy the stock sample inventory to a new directory rather than editing it in place, to preserve it as a reference.

First let’s go into inventory/sample/hosts.ini (relative to the repo’s root directory) and change the contents to the following:

[kube-master]
kube-master01

[kube-node]
kube-worker01
kube-worker02

[etcd]
kube-master01

[k8s-cluster:children]
kube-master
kube-node
This is obviously just a regular ansible inventory file with four host groups:

  • kube-master is the host group for Kubernetes masters. In this case we just have the one.
  • kube-node is the regular non-master workload nodes
  • etcd are the nodes which will house the persistent etcd database. In our case we’re setting it to be the same system as kube-master, but this needn’t be the case (and probably shouldn’t be on production systems).
  • k8s-cluster:children defines which of the other host groups come together to form the k8s-cluster hostgroup. In our case kube-master and kube-node come together to form the entire cluster.

Please note that all host names should be resolvable from the provisioning host, or else the ansible_host= argument should be used in the inventory file.
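Before running the full playbook, it’s worth a quick connectivity sanity check: Ansible’s ping module confirms that ssh access and a working Python interpreter are in place on every node in the inventory.

```shell
# Should return "pong" from each host in the inventory if ssh keys and
# Python are correctly set up on the target nodes
ansible all -i inventory/sample/hosts.ini -m ping
```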

After this point you’re technically ready to bootstrap the cluster, but let’s change one last thing: instead of the default overlay network of Calico we’ll use Weave, and we’ll additionally enable encryption of network traffic between pods on different hosts. To do this, edit the inventory/sample/group_vars/k8s-cluster.yml configuration file, locate the kube_network_plugin key, and set it to weave. Additionally, to enable encryption, add weave_password: password123 on the next line (changing the password to something that suits you).
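The relevant portion of inventory/sample/group_vars/k8s-cluster.yml would then read as follows (the password shown is the throwaway example from above — pick your own):

```yaml
# Switch the overlay network from the calico default to weave
kube_network_plugin: weave
# Enables encryption of inter-pod traffic between hosts
weave_password: password123
```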

At this point both the inventory file and the group variables match our desired end state, so we can bootstrap the cluster by supplying the inventory file we’re using, the cluster.yml playbook, and the -b flag (so that all commands are run as root). The full command is therefore:

ansible-playbook -bi inventory/sample/hosts.ini cluster.yml

Be forewarned: As mentioned in the overview, this process is much slower than kops. One of its benefits is that it’s largely fire-and-forget though. I would give the entire process 20-30 minutes to complete (may be longer depending on a variety of factors).

Towards the end of the run you’ll see the play recap, which should look similar to this:

PLAY RECAP ************************************************************************************************************************************************************************************
kube-master01 : ok=340   changed=108   unreachable=0   failed=0 
kube-worker01 : ok=220   changed=68    unreachable=0   failed=0 
kube-worker02 : ok=220   changed=68    unreachable=0   failed=0 
kube-worker03 : ok=219   changed=68    unreachable=0   failed=0 
localhost     : ok=2     changed=0     unreachable=0   failed=0

If it does, then congratulations: you’re the proud owner of a brand-new Kubernetes cluster.


Verifying the Cluster

When your cluster finishes bootstrapping, kubespray will have configured kubectl for the root user on the master(s), so you should be able to ssh into that box and issue the following commands to create a new nginx service:

[me@workstation kubespray]$ ssh kube-master01

jad87@kube-master01:~$ sudo -i

root@kube-master01:~# kubectl run --image nginx --port 80 nginx
deployment.apps "nginx" created

root@kube-master01:~# kubectl expose deployment --type=NodePort nginx
service "nginx" exposed

root@kube-master01:~# kubectl describe service nginx | grep NodePort:
NodePort: <unset> 31582/TCP

In the above we:

  • Created a simple nginx deployment using the upstream Docker image for nginx
  • Exposed the deployment via a NodePort-type service so that the port specified in the kubectl run command (the --port 80) will be accessible on a random available port on each node in the cluster
  • Examined the service to determine that the port that was allocated was 31582 (your port number will differ obviously).
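For scripting, that port can be dug out of the describe line without eyeballing it. Here is the text processing in isolation, run against a canned copy of the describe output (your port number will differ):

```shell
# A canned copy of the line; the live version comes from:
#   kubectl describe service nginx | grep NodePort:
line='NodePort:                 <unset>  31582/TCP'

# The third whitespace-delimited field is "31582/TCP"; strip the protocol
port=$(echo "$line" | awk '{print $3}' | cut -d/ -f1)
echo "$port"
```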

At this point all four of the following URLs pull up the nginx landing page generated by the nginx process running in the cluster:


Troubleshooting

  • At the time of this writing, kubespray had issues with Ubuntu 18.04 not being able to install the Docker CE repositories due to pointing at an older repository. Using 16.04 works around this issue for the time being.
  • Many of the errors I’ve had with kubespray relate to memory issues. Please allow at least 2GB of memory for each VM. Otherwise the playbook run may error out at seemingly random points with no mention of VM memory being the issue.
  • If you have swap enabled on the systems, kubespray will disable it automatically but will not comment the line out in /etc/fstab, so if you reboot one of the nodes and kubelet begins failing to load again, look at the swap configuration first.
  • If you attempt to install more than one Kubernetes master, please ensure that the total number is odd. By design, kubespray will error out on an even number of Kubernetes masters to preserve quorum.
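Regarding the swap bullet above: commenting the entry out of /etc/fstab yourself makes the change survive reboots. Here is the sed involved, demonstrated against a scratch copy of an fstab rather than the real file (on an actual node you would run swapoff -a and then the same sed with sudo against /etc/fstab):

```shell
# Build a scratch fstab with one regular mount and one swap entry
cat > /tmp/fstab.demo <<'EOF'
UUID=1234-abcd /         ext4 errors=remount-ro 0 1
/swapfile      none      swap sw                0 0
EOF

# Comment out any line whose filesystem type column is swap
sed -i '/\sswap\s/s/^/#/' /tmp/fstab.demo
cat /tmp/fstab.demo
```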

Method #5: Managed Kubernetes Services

In general, managed Kubernetes allows you to create a Kubernetes cluster that natively supports things such as the addition and removal of worker nodes in response to load (i.e. auto-scaling), and to do so using normal compute nodes (GCE, EC2, etc). Since I did AWS earlier with kops, and because I think Google’s managed Kubernetes product (Google Kubernetes Engine, or “GKE”) is easier to use, I’ll move over to Google Cloud.

Installing Required Software and Enabling the API

First things first, we need to ensure that the gcloud SDK is available on our system. Setting that up is a little out of scope for this article, but Google’s documentation covers setting it up on Ubuntu (subsequent commands were run on Ubuntu 18.04.1 LTS but should work on other distros as well). Once it’s installed and gcloud init has been run through properly, you should be good to go on the client; you need only enable the Compute API in the API Dashboard for your project, if it isn’t already enabled. If you had to enable the API, I would give their systems 5-10 minutes to register this event.

Bootstrapping the Cluster

OK, like I said before, Google Cloud makes managing a Kubernetes cluster pretty easy:

[me@workstation ~]$ gcloud container clusters create test-cluster
WARNING: Starting in 1.12, new clusters will have basic authentication disabled by default. Basic authentication can be enabled (or disabled) manually using the `--[no-]enable-basic-auth` flag.
WARNING: Starting in 1.12, new clusters will not have a client certificate issued. You can manually enable (or disable) the issuance of the client certificate using the `--[no-]issue-client-certificate` flag.
WARNING: Currently VPC-native is not the default mode during cluster creation. In the future, this will become the default mode and can be disabled using `--no-enable-ip-alias` flag. Use `--[no-]enable-ip-alias` flag to suppress this warning.
This will enable the autorepair feature for nodes. Please see for more
information on node autorepairs.

WARNING: Starting in Kubernetes v1.10, new clusters will no longer get compute-rw and storage-ro scopes added to what is specified in --scopes (though the latter will remain included in the default --scopes). To use these scopes, add them explicitly to --scopes. To use the new behavior, set container/new_scopes_behavior property (gcloud config set container/new_scopes_behavior true).
Creating cluster test-cluster...done. 
Created [].
To inspect the contents of your cluster, go to:
kubeconfig entry generated for test-cluster.
test-cluster us-east4-a  1.9.7-gke.5  n1-standard-1  1.9.7-gke.5  3          RUNNING

And boom. Your cluster is now running.
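Since auto-scaling was called out above as a selling point of managed Kubernetes, note that (per the gcloud documentation) the cluster could have been created with node autoscaling enabled from the start. The bounds here are just illustrative:

```shell
# Illustrative only: let GKE scale the default node pool between 1 and 5 nodes
gcloud container clusters create test-cluster \
  --enable-autoscaling --min-nodes 1 --max-nodes 5
```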

The clusters create command automatically writes a valid entry into your ~/.kube/config for the current context, so you should also be able to see all the nodes it spun up:

[me@workstation ~]$ kubectl get nodes 
NAME                                         STATUS  ROLES   AGE  VERSION 
gke-test-cluster-default-pool-a0416b96-727l  Ready   <none>  6m   v1.9.7-gke.5 
gke-test-cluster-default-pool-a0416b96-gw61  Ready   <none>  6m   v1.9.7-gke.5
gke-test-cluster-default-pool-a0416b96-kpdb  Ready   <none>  6m   v1.9.7-gke.5


Verifying the Cluster

Running the nginx test deployment is pretty straightforward. From the provisioning system with gcloud installed:

[me@workstation ~]$ kubectl run --image nginx --port 80 nginx
deployment.apps/nginx created

[me@workstation ~]$ kubectl expose deployment --type=LoadBalancer nginx
service/nginx exposed

[me@workstation ~]$ kubectl describe service nginx | grep "LoadBalancer Ingress"
LoadBalancer Ingress:

Since (unlike Amazon) Google’s load balancer is accessed via IP address, this should be immediately available in your browser at that address, which for me it indeed was.


Troubleshooting

There’s not much to troubleshoot here, really. As long as the gcloud SDK is on your system and functioning properly, that’s all that’s required. Everything outside of that would be cloud provider issues, such as billing problems.

Comparison of Approaches

Obviously you have to weigh the pros and cons of each of the above approaches and decide for yourself which is the easiest and most effective for what you’re trying to do. For example, nothing is going to quite replace minikube for individual development, but it’s next to useless if you’re trying to construct an on-premises microservice-oriented application at scale. Personal experience and personal preference will inform your choices.

In general though we can have some rules of thumb many would find acceptable:

  • If you’re just trying to get to the “Kubernetes in Production” state without specific requirements, you probably want managed Kubernetes
  • If you’re trying to learn more about how Kubernetes works, or have very specific requirements that managed Kubernetes doesn’t allow for, then using kops or kubespray makes sense. For instance, many managed solutions may prevent certain features of Kubernetes from being available due to those features still being in alpha.
  • If you’re worried about vendor lock-in, running something you manage yourself might be preferable. For instance, kops supports many backend cloud providers, so even if you start on GKE you may want your workflow to remain as unchanged as possible should you decide to migrate everything over to EC2.
  • If you’re trying to deploy on-premises then kubespray will probably save you the most labor hours. The amount of time it takes is roughly the same as manually installing it but it has a “fire-and-forget” nature that requires less actual operator time.

Further Reading