Kubernetes and KubeVirt: From Bare Metal to Private Cloud

What we are going to Build

  • A platform to handle both VMs and containers.
  • Software Defined Networking and Software Defined Storage.
  • A single, unified command line to manage the network, storage, virtual machines, and containers.

To raise the challenge even more

  • I am going to build it on my 8 GB personal laptop.
  • I have no previous knowledge of the common technologies such as VMware, OpenStack, or the Apache stack.
  • I will use only my Kubernetes knowledge.
  • I am going to fit it all in a single article.
  • I will use only the open-source projects' own documentation, with zero coding.
  • Nothing pre-installed; we will start with an empty VM or an empty machine.

Sounds impossible! I will explain in the upcoming sections; if you already know KubeVirt, you can skip straight to the implementation steps.

Let us start by defining the cloud

The cloud used to be defined in a very simple way, as XaaS, "X as a Service":

  • IaaS : Infrastructure as a Service
  • PaaS : Platform as a Service
  • SaaS : Software as a Service

Ingredients

  • A few bare-metal machines.
  • A few networking devices.
  • A few flash disks.
  • A few masterminds in networking, storage, and virtualization to assemble all of that together.

What does it take to build a modern private cloud?

One word: the SDX family. With the rise of cloud native, more and more SDX components come into play. But what is SDX? It is "Software Defined X". The most commonly known components are:

  • SDN : Software Defined Networking
  • SDS : Software Defined Storage

SDN: Software Defined Networking

Controlling the traffic between your virtual machines is no longer a matter of configuring your physical switch or router. With SDN we build an overlay network across all your physical hosts, and rebuild the whole network and its security policies on top of it.

SDS: Software Defined Storage

Providing a shared storage pool used to mean bringing up an all-flash appliance or arrays of spinning disks. The SDS way of doing it is different: take a few commodity machines with SSD or NVMe drives, install the SDS software on top of them, and your Storage as a Service is ready to serve ;).

Kubernetes is more than just a Container Orchestrator

Kubernetes is the most widely used platform for container orchestration, but a closer look reveals that it is more than just a container orchestrator: it provides the CXI interfaces, where X is N or S:

  • CNI : Container Network Interface
  • CSI : Container Storage Interface

CNI and CSI in Kubernetes

These are standard specifications and libraries adopted by the industry and community to integrate Kubernetes with SDN solutions such as NSX-T, Cisco ACI, Calico, and others. CSI, likewise, provides a standardized interface for storage, including software-defined storage.

Custom Resource Definition

In addition to the standard objects, Kubernetes can be extended with Custom Resource Definitions (CRDs), which introduce additional custom objects, such as VMs alongside containers.
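For example, KubeVirt (which we install later in this article) registers CRDs that make virtual machines first-class API objects, queryable like any pod. A quick sketch (these commands only work after Step 8 below):

kubectl get crd | grep kubevirt.io   # the custom resource types KubeVirt registered
kubectl get vmi                      # list VirtualMachineInstances, just like pods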

Building a private cloud that handles both Virtual Machines and Containers

In this approach, Kubernetes is installed on top of bare-metal machines, or on virtual machines running on top of a hypervisor, alongside a traditional virtualization platform that serves the VMs.

Advantage of this approach

The products that implement this design are mature, whether open source (OpenStack, Kubernetes, Cloud Foundry, Mesos, the Apache stack, oVirt, ...) or commercial (VMware, Red Hat OpenStack, Red Hat Virtualization, ...).

Disadvantage

IT IS COMPLEX!

  • Two different products: different types of APIs, different dashboards, different highly trained and skilled engineers.
  • Compatibility needs to be ensured between all the different components.
  • Two control planes, which consume additional resources.
  • Cross network policies between virtual machines and containers require an additional product such as NSX-T, Contrail, and so on.

Kubevirt when Kubernetes meets LibVirt.

Let us put everything together: CNI, CSI, and CRDs. A CRD can be used to define and create a complete virtual machine, not only containers. And this is not some customized lightweight VM; it is a full VM that can be created and managed by Kubernetes, with all its networking and storage provided through the Kubernetes API.

Advantage of this approach

  • Simpler, at least on the diagram: a single control plane.
  • Mainly Kubernetes skills are required.
  • Storage and networking are handled as Kubernetes resources.

Disadvantage

Still a new technology, both as open source and commercially (commercial support will be provided by Red Hat CNV, which is still in tech preview as of the time of writing this article). A UI and dashboard are not available yet; Red Hat CNV will introduce one as part of the OpenShift Console.

Without further ado, let's build a private cloud based on Kubernetes

Ingredients:

  • An 8 GB laptop running Windows, Mac, or Linux
  • A home switch, any brand
  • A hypervisor layer, unless you want to run on your machine directly (I am using Windows Hyper-V)

Step 1: Create the VM (if you have a spare physical machine, skip this).

I am using Hyper-V, with the VM's network adapter connected directly to the host network. Machine specs: 4 cores, 4 GB RAM.

Step 2: Mount the host OS ISO; I am using Ubuntu 18.04

In Hyper-V, disable Secure Boot in order to be able to install Linux.

Step 3: Install the Operating System

A few steps, nothing special; just follow the default settings, as all configuration will be done post-installation. The key aspect is to ensure that your IP address is assigned by your home switch, by attaching the VM to the host network.

Step 4: Install Docker

4.A: Open the machine from the console to get the IP, and ping any public website to ensure there is public internet connectivity.
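A minimal sketch of that check (the addresses are examples):

ip addr show              # note the IP assigned by the home switch
ping -c 3 8.8.8.8         # confirm raw outbound connectivity
ping -c 3 www.google.com  # confirm DNS resolution works too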

curl https://raw.githubusercontent.com/ahmadsayed/useful-tools/master/install-docker-ubuntu-1804.sh |  sh -
sudo docker run hello-world

Step 5: Install Kubernetes

Yes, we will use kubeadm, following the Kubernetes documentation (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/):

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system

Installing kubeadm, kubelet and kubectl
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo swapoff -a
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
## In my case I have a single machine, so I will make the master schedulable by removing the master taint
kubectl taint nodes --all node-role.kubernetes.io/master-
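Before kubectl works as a regular user, copy the admin kubeconfig as the kubeadm output instructs, then confirm the node becomes Ready once Calico is up:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes               # the node should report Ready after a minute or two
kubectl get po -n kube-system   # calico and coredns pods should be Running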

Step 6: Configure dynamically provisioned, software-defined block storage

For storage we have many options, most of which require quite a lot of resources. In our cloud we will provide only block storage, using Rancher's Longhorn project, as it is the easiest to install.

kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml
kubectl edit svc longhorn-frontend -n longhorn-system
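The edit above is to expose the Longhorn UI outside the cluster; a sketch, assuming a NodePort is acceptable in a home lab (change spec.type from ClusterIP to NodePort in the editor), then verify the storage class:

kubectl get svc longhorn-frontend -n longhorn-system   # note the assigned NodePort for the UI
kubectl get storageclass                               # 'longhorn' should now be listed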

Install KubeVirt.

So far we have built a functional Kubernetes cluster with only open-source technologies; time to give it the capability to provision virtual machines.

Step 7: Install libvirt on all the host machines (in my case, one machine)

sudo apt install libvirt-clients
virt-host-validate qemu
ahmed@mycloud:~$ virt-host-validate qemu
QEMU: Checking for hardware virtualization : FAIL (Only emulated CPUs are available, performance will be significantly limited)
QEMU: Checking if device /dev/vhost-net exists : PASS
QEMU: Checking if device /dev/net/tun exists : PASS
QEMU: Checking for cgroup 'memory' controller support : PASS
QEMU: Checking for cgroup 'memory' controller mount-point : PASS
QEMU: Checking for cgroup 'cpu' controller support : PASS
QEMU: Checking for cgroup 'cpu' controller mount-point : PASS
QEMU: Checking for cgroup 'cpuacct' controller support : PASS
QEMU: Checking for cgroup 'cpuacct' controller mount-point : PASS
QEMU: Checking for cgroup 'cpuset' controller support : PASS
QEMU: Checking for cgroup 'cpuset' controller mount-point : PASS
QEMU: Checking for cgroup 'devices' controller support : PASS
QEMU: Checking for cgroup 'devices' controller mount-point : PASS
QEMU: Checking for cgroup 'blkio' controller support : PASS
QEMU: Checking for cgroup 'blkio' controller mount-point : PASS
QEMU: Checking for device assignment IOMMU support : WARN (Unknown if this platform has IOMMU support)
Hardware virtualization is not available inside my VM (the FAIL above), so we tell KubeVirt to fall back to software emulation:

kubectl create namespace kubevirt
kubectl create configmap -n kubevirt kubevirt-config \
  --from-literal debug.useEmulation=true
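A quick sanity check that the flag landed:

kubectl get configmap kubevirt-config -n kubevirt -o jsonpath='{.data}'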

Step 8: Installing KubeVirt

export VERSION=v0.27.0
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml
kubectl -n kubevirt wait kv kubevirt --for condition=Available
kubectl get po -n kubevirt
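If the rollout succeeded, the listing should show the KubeVirt control-plane pods, virt-api, virt-controller, virt-handler, and virt-operator, all in the Running state.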

Step 9: Install the virtctl client tool

To manage our minimalist cloud we need a single command line, like ibmcloud for IBM, az for Azure, or aws for Amazon. In our case the global command is, no surprise, kubectl; we download the virtctl binary and also install it as a kubectl plugin via krew:

export VERSION=v0.27.0
wget https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-linux-x86_64
(
set -x; cd "$(mktemp -d)" &&
curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/krew.{tar.gz,yaml}" &&
tar zxvf krew.tar.gz &&
KREW=./krew-"$(uname | tr '[:upper:]' '[:lower:]')_amd64" &&
"$KREW" install --manifest=krew.yaml --archive=krew.tar.gz &&
"$KREW" update
)
export PATH="${PATH}:${HOME}/.krew/bin"
kubectl krew install virt
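A quick check that the plugin is wired up:

kubectl krew list    # 'virt' should appear in the list
kubectl virt help    # virtctl's subcommands, exposed through kubectl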

Creating the first Virtual machine

Time to create a virtual machine, and we will not settle for a simple OS such as BusyBox or CirrOS; we will bring up a real Fedora machine.

nano testvm.yaml

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  name: testvmi-nocloud
spec:
  terminationGracePeriodSeconds: 30
  domain:
    resources:
      requests:
        memory: 1024M
    devices:
      disks:
      - name: containerdisk
        disk:
          bus: virtio
      - name: emptydisk
        disk:
          bus: virtio
      - disk:
          bus: virtio
        name: cloudinitdisk
  volumes:
  - name: containerdisk
    containerDisk:
      image: kubevirt/fedora-cloud-container-disk-demo:latest
  - name: emptydisk
    emptyDisk:
      capacity: "2Gi"
  - name: cloudinitdisk
    cloudInitNoCloud:
      userData: |-
        #cloud-config
        password: fedora
        chpasswd: { expire: False }

kubectl create -f testvm.yaml
kubectl get vmi                        # wait until the phase shows Running
kubectl virt console testvmi-nocloud
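To detach from the serial console without stopping the VM, press Ctrl+].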

Networking Time

Step 11: Accessing the Internet

Log in to the machine with fedora/fedora and test name resolution:

[fedora@testvmi-nocloud ~]$ curl https://www.google.com
curl: (6) Could not resolve host: www.google.com
The cluster DNS is the culprit; edit the CoreDNS ConfigMap:

kubectl edit cm coredns -n kube-system

The default Corefile looks like this:
Corefile: |
  .:53 {
      errors
      health {
         lameduck 5s
      }
      ready
      kubernetes cluster.local in-addr.arpa ip6.arpa {
         pods insecure
         fallthrough in-addr.arpa ip6.arpa
         ttl 30
      }
      prometheus :9153
      forward . /etc/resolv.conf
      cache 30
      loop
      reload
      loadbalance
  }
Change the forward plugin so CoreDNS stops forwarding to the node's /etc/resolv.conf (which on Ubuntu typically points at the local systemd-resolved stub, unreachable from inside the cluster) and uses a public resolver instead:

forward . /etc/resolv.conf      # before
forward . 8.8.8.8               # after
Recreate the VM, log back in, and try again:

kubectl delete -f testvm.yaml
kubectl create -f testvm.yaml
curl https://www.google.com

Step 13: VM-to-Pod Communication

Reach the virtual machine from another pod over the pod network:

kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.5/samples/sleep/sleep.yaml
kubectl get vmi   # note the IP address of the virtual machine
kubectl get po
kubectl exec -ti sleep-8f795f47d-zj7bx -- sh   # replace with your pod name
ping 192.168.188.34   # use the IP reported by kubectl get vmi

Step 14: Add the VM to service discovery, a.k.a. Kubernetes DNS (CoreDNS). The same way we define a Service for pods, we add a label to the VM's YAML and create a Service that selects it:

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  name: testvmi-nocloud
  labels:
    vm: fedora
spec:
  terminationGracePeriodSeconds: 30
  domain:
    resources:
      requests:
        memory: 1024M
    devices:
      disks:
      - name: containerdisk
        disk:
          bus: virtio
      - name: emptydisk
        disk:
          bus: virtio
      - disk:
          bus: virtio
        name: cloudinitdisk
  volumes:
  - name: containerdisk
    containerDisk:
      image: kubevirt/fedora-cloud-container-disk-demo:latest
  - name: emptydisk
    emptyDisk:
      capacity: "2Gi"
  - name: cloudinitdisk
    cloudInitNoCloud:
      userData: |-
        #cloud-config
        password: fedora
        chpasswd: { expire: False }
---
apiVersion: v1
kind: Service
metadata:
  name: vmservice
spec:
  selector:
    vm: fedora
  clusterIP: None
  ports:
  - name: port-22   # just a DNS entry; no real port handling is needed, per the documentation
    port: 22
    targetPort: 22
Recreate the VM with the updated file (delete and create it as before, which also creates the service), then test name resolution from the sleep pod:

kubectl get po
kubectl exec -ti sleep-8f795f47d-zj7bx -- sh   # replace with your pod name
ping vmservice
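Because vmservice is headless (clusterIP: None), the cluster DNS resolves the name directly to the VM's own address. You can also confirm the label matched from outside the pod:

kubectl get endpoints vmservice   # should list the VM's IP on port 22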

The Storage

Step 16: emptyDisk, the ephemeral storage.

How is the storage handled? Recall the Longhorn storage solution from Step 6. The current VM specification uses an emptyDisk, which is ephemeral, as the second disk:

- name: emptydisk
  emptyDisk:
    capacity: "2Gi"

Step 17: Using shared storage provided by the SDS instead of the ephemeral disk.

First, create a PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhornpvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: longhorn
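Save the claim and apply it (the filename longhornpvc.yaml is just my choice here):

kubectl create -f longhornpvc.yaml
kubectl get pvc    # wait until the STATUS column shows Bound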
Next, update testvm.yaml to replace the emptydisk volume with a blockdisk backed by the claim (the full spec follows), then recreate the VM:

kubectl delete -f testvm.yaml
kubectl create -f testvm.yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  name: testvmi-nocloud
spec:
  terminationGracePeriodSeconds: 30
  domain:
    resources:
      requests:
        memory: 1024M
    devices:
      disks:
      - name: containerdisk
        disk:
          bus: virtio
      - name: blockdisk
        disk:
          bus: virtio
      - disk:
          bus: virtio
        name: cloudinitdisk
  volumes:
  - name: containerdisk
    containerDisk:
      image: kubevirt/fedora-cloud-container-disk-demo:latest
  - name: blockdisk
    persistentVolumeClaim:
      claimName: longhornpvc
  - name: cloudinitdisk
    cloudInitNoCloud:
      userData: |-
        #cloud-config
        password: fedora
        chpasswd: { expire: False }
Log in to the VM again; the 8 Gi Longhorn volume appears as a new virtio block device, /dev/vdb. Format and mount it:

-bash-4.4$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda    252:0    0    4G  0 disk
└─vda1 252:1    0    4G  0 part /
vdb    252:16   0  7.8G  0 disk
vdc    252:32   0  366K  0 disk

sudo mkfs.ext4 /dev/vdb
sudo mkdir /mnt/disk1
sudo mount /dev/vdb /mnt/disk1
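A quick check that the mount works and that the data now lives on the SDS (a minimal sketch):

df -h /mnt/disk1                                   # should report roughly 7.8G of capacity
echo "hello from the SDS" | sudo tee /mnt/disk1/test.txt

Unlike the emptyDisk from Step 16, this data survives deleting and recreating the VM, because it lives in the longhornpvc claim.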

Final words

Virtualization and containers used to be discussed as an OR relation: should I deploy my workload as a VM or as a container? But from the infrastructure point of view it has always been an AND relation, because Kubernetes itself is most probably deployed on top of a hypervisor, and its worker nodes are just virtual machines.
