
Attempting to set up Kubernetes on Ubuntu VMs

This post details my naive attempt to bring up a Kubernetes cluster on a VM. These steps try out Kubernetes on a bare Google virtual machine, but they should work for most Debian/Ubuntu virtual machines. This deploys a single-node Kubernetes cluster (naturally, don’t think of using this for production).

lxc is the command-line client for lxd, a daemon that manages Linux containers (https://en.wikipedia.org/wiki/LXC). The conjure-up tool installs Kubernetes into these containers.

Installing snap, lxd and conjure-up

  • We deploy a Kubernetes cluster on a single node using the snap utility. These are the commands to do it.
  • We first install snap via snapd. Then, using snap, we install lxd, kubectl and conjure-up.
  • We then add our own username to the lxd group so that we don’t need sudo when running lxd/lxc commands.
  • We use lxc to communicate with lxd.
sudo apt update
sudo apt -y install snapd
sudo snap install lxd
sudo snap install kubectl --classic
sudo snap install conjure-up --classic
sudo usermod --append --groups lxd {{ NAME }}
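Before moving on, it’s worth sanity-checking the installs. A quick verification sketch (assuming the commands above completed successfully):

```shell
# Confirm the snaps are installed and visible.
snap list lxd kubectl conjure-up

# Confirm our user was added to the lxd group.
# Note: group membership only takes effect in a new session,
# so log out and back in (or run `newgrp lxd`) first.
groups "$USER" | grep --word-regexp lxd
```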

Deploy a Kubernetes cluster via conjure-up

  • We first initialize the environment for the Linux containers to live in; the init command sets up the network as well as the storage. One thing to note: for this Kubernetes deployment, only the dir storage type works. Other storage backends will cause the deployment to halt completely.
# Most defaults work fine; for storage, choose the dir type
lxd init

# Questions and choices
# Would you like to use LXD clustering? (yes/no) [default=no]: no
# Do you want to configure a new storage pool? (yes/no) [default=yes]:
# Name of the new storage pool [default=default]:
# Name of the storage backend to use (btrfs, ceph, dir, lvm) [default=btrfs]: dir
# Would you like to connect to a MAAS server? (yes/no) [default=no]:
# Would you like to create a new local network bridge? (yes/no) [default=yes]:
# What should the new bridge be called? [default=lxdbr0]:
# What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
# What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: none
# Would you like LXD to be available over the network? (yes/no) [default=no]:
# Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
# Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

conjure-up kubernetes

# Questions and choices
# What kind of kubernetes installation: kubernetes-core
# Where to install: localhost
# Storage pool: default
# Network bridge: lxdbr0
# Network: flannel

The generated lxd init preseed file

config: {}
networks:
- config:
    ipv4.address: auto
    ipv6.address: none
  description: ""
  managed: false
  name: lxdbr0
  type: ""
storage_pools:
- config: {}
  description: ""
  name: default
  driver: dir
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
cluster: null

The conjure-up command takes a while to run; the time depends on the virtual machine’s network. The faster the network, the faster the cluster can be deployed. After waiting for a while, the Kubernetes cluster should be up.
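Once conjure-up finishes, a couple of kubectl commands (assuming conjure-up has written a kubeconfig for our user) give a quick health check:

```shell
# The single node should show up as Ready.
kubectl get nodes

# Core components (kube-dns, flannel, etc.) should be Running.
kubectl get pods --all-namespaces

# Prints the API server and service endpoints.
kubectl cluster-info
```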

Testing out the Kubernetes cluster

We try running some nginx containers, as they are among the simplest to run. With the nginx containers, we should be able to hit the service on port 80 and get back the default nginx page.

kubectl run --image nginx lol
kubectl run --image nginx lol1
kubectl expose deployments lol --type NodePort --port 80
kubectl exec -it {{ lol1-pod-name }} -- /bin/bash

Within the lol1 container, we can test against the lol pod to see if it serves the default nginx HTTP page.

apt update
apt install curl
curl {{ ip address of lol }}:80
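The pod IP placeholder above can be filled in with kubectl. For example, assuming the lol deployment’s pods carry the default run=lol label, something like:

```shell
# List pods together with their cluster-internal IPs.
kubectl get pods -o wide

# Or grab the first lol pod's IP directly via jsonpath.
LOL_IP=$(kubectl get pods -l run=lol \
  -o jsonpath='{.items[0].status.podIP}')
curl "${LOL_IP}:80"
```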

Unfortunately, to date, I haven’t been able to expose the Kubernetes cluster to any traffic from the outside world. There are several tactics one can try, but none of them worked for me. It could be a misconfiguration on my part. Networking is a serious pain here; there are many solutions that could potentially solve the issue, but I’m not exactly sure why something would or wouldn’t work.

Some of the possible actions to get traffic to the cluster; however, I couldn’t get any of them to work:

  • Using iptables. A lot of custom configuration. I’ve tried using FORWARD rules, manipulating ACCEPT targets, and even attempted REDIRECT, but none seemed to work.
  • Using NodePorts, but it’s definitely not the most ideal solution here. It exposes really odd ports in the 30000+ range (30000–32767 by default). There is a high chance such ports are blocked on company networks, so it wouldn’t really make much sense to try this here.
  • Using Kubernetes Ingress. It requires a domain name to work well. This is also a whole bunch of configuration work, but the main issue is that we’re not sure whether traffic hitting the host machine is actually reaching the Kubernetes cluster. It’s hard to inspect for that; tools are definitely needed to check for this.
  • Using externalIPs and ExternalName. These resources are not managed by Kubernetes; however, I’m not too sure why these aren’t working as expected either.
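For reference, the iptables tactic mentioned above usually boils down to a DNAT rule on the host. A sketch of the kind of rules involved (port 8080 on the host and NodePort 30080 on the node IP 10.0.3.1 are hypothetical values, not taken from my actual setup):

```shell
# Rewrite traffic arriving on the host's port 8080 so it targets a
# NodePort service on the node's IP (assumed to be 10.0.3.1:30080).
sudo iptables -t nat -A PREROUTING -p tcp --dport 8080 \
  -j DNAT --to-destination 10.0.3.1:30080

# Allow the rewritten packets through the FORWARD chain.
sudo iptables -A FORWARD -p tcp -d 10.0.3.1 --dport 30080 \
  -j ACCEPT

# Make sure the kernel forwards packets at all.
sudo sysctl -w net.ipv4.ip_forward=1
```

None of this worked reliably in my setup, but it shows the shape of the configuration these tactics require.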

I will still attempt to play around with this tool for deploying Kubernetes clusters, but finding a solution to expose traffic to the outside world will take a while.