
Production-ready highly available Kubernetes cluster with a load balancer in just five minutes!

Last updated on April 25, 2023

Kubernetes has ruled the industry for years as a self-governed, self-healing container orchestration system, scaling workloads automatically to actual demand and potentially saving a fortune in hosting expenses. Even so, deploying a production-ready cluster on your own is still far from trivial, which pushes many teams toward the relatively expensive managed Kubernetes offerings of mainstream cloud providers such as AWS, Google, and Azure. That may be fine for businesses whose revenue beats their expenses, but it is far from ideal for organizations of great value to society running on tight budgets, such as nonprofits and educational institutions. Running Kubernetes on bare-metal servers, like those offered by Scaleway or Hetzner, can shave a whole zero off hosting expenses, and a difference of that magnitude can decide whether a project or company survives. The script below automatically deploys a Kubernetes cluster on virtual machines managed through the libvirt open-source hypervisor API, using the Alpine Linux cloud image for the cluster nodes.

#!/bin/bash

# Define variables
NODES=("master1" "master2" "master3" "worker1" "worker2" "worker3")
NODE_COUNT=${#NODES[@]}
IMAGE_URL="https://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/x86_64/alpine-virt-3.14.2-x86_64.qcow2"
IMAGE_NAME="alpine-virt-3.14.2-x86_64"
VM_RAM=2048
VM_VCPU=2
K8S_VERSION="v1.22.5"
KUBECONFIG="/root/kubeconfig"

# Install necessary packages (virt-install, jq and kubectl are all used later in this script)
apk update
apk add libvirt libvirt-daemon qemu-img qemu-system-x86_64 virt-install openssh-client bash curl jq kubectl
rc-update add libvirtd
rc-service libvirtd start

# Download Alpine Linux cloud image (follow CDN redirects, fail on HTTP errors)
curl -fLo ${IMAGE_NAME}.qcow2 ${IMAGE_URL}

# Create libvirt virtual machines, giving each node its own copy-on-write disk
# (all nodes sharing the single downloaded image file would corrupt it)
for (( i=0; i<NODE_COUNT; i++ )); do
  NODE=${NODES[i]}
  qemu-img create -f qcow2 -b ${IMAGE_NAME}.qcow2 -F qcow2 ${NODE}.qcow2
  virt-install --connect qemu:///system --name $NODE --ram $VM_RAM --vcpus $VM_VCPU \
    --os-type linux --os-variant alpinelinux3.14 --disk ${NODE}.qcow2,format=qcow2,bus=virtio \
    --network network=default,model=virtio --import --noautoconsole
done

# Wait for virtual machines to boot
sleep 30
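A fixed sleep is fragile: on a slow host 30 seconds may not be enough, and on a fast one it wastes time. One alternative is to poll until a condition holds. A minimal sketch of such a retry helper (`wait_for` is an illustrative name, not part of the script above):

```shell
#!/bin/bash
# Hypothetical retry helper: run a command up to N times with a delay
# between attempts, returning success as soon as the command succeeds.
wait_for() {
  local tries=$1 delay=$2; shift 2
  local i
  for (( i=0; i<tries; i++ )); do
    "$@" && return 0
    sleep "$delay"
  done
  return 1
}

# Example: poll until a node is considered up (stubbed here with `true`;
# in the script you would test e.g. that virsh domifaddr returns a lease).
wait_for 10 3 true && echo "node is up"
```

In the script, the stubbed `true` would be replaced by a check against each VM, e.g. a small function that greps `virsh domifaddr $NODE` for an IPv4 address.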

# Get IP addresses of virtual machines
NODE_IPS=()
for (( i=0; i<NODE_COUNT; i++ )); do
  NODE=${NODES[i]}
  NODE_IP=$(virsh domifaddr $NODE | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b" | head -n 1)
  NODE_IPS[i]=$NODE_IP
done
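The grep pipeline above can be sanity-checked offline against a line in the shape `virsh domifaddr` prints. A small sketch (the helper name and the sample MAC/IP values are illustrative):

```shell
#!/bin/bash
# Hypothetical helper: pull the first IPv4 address out of
# virsh domifaddr-style output (the MAC column has no dots, so it never matches).
first_ipv4() {
  grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | head -n 1
}

# Sample line in the shape virsh domifaddr prints (values made up).
sample=' vnet0      52:54:00:4b:73:5f    ipv4         192.168.122.41/24'
first_ipv4 <<< "$sample"   # prints 192.168.122.41
```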

# Initialize the Kubernetes control plane on the first master only;
# running kubeadm init on all three would create three separate clusters
ssh root@${NODE_IPS[0]} "kubeadm init --control-plane-endpoint=${NODE_IPS[0]}:6443 --pod-network-cidr=10.244.0.0/16 --kubernetes-version=${K8S_VERSION} --ignore-preflight-errors=NumCPU --upload-certs" | tee /root/kubeadm-init.out

# Save the kubeconfig file for later use
scp root@${NODE_IPS[0]}:/etc/kubernetes/admin.conf $KUBECONFIG
chown $(id -u):$(id -g) $KUBECONFIG

# Join the remaining masters as additional control-plane nodes,
# reusing the certificate key that --upload-certs printed
CERT_KEY=$(grep -oE -- '--certificate-key [0-9a-f]+' /root/kubeadm-init.out | awk '{print $2}' | head -n 1)
MASTER_JOIN=$(ssh root@${NODE_IPS[0]} "kubeadm token create --print-join-command")
for (( i=1; i<3; i++ )); do
  ssh root@${NODE_IPS[i]} "$MASTER_JOIN --control-plane --certificate-key $CERT_KEY"
done

# Join worker nodes to the cluster; asking kubeadm for a fresh join command
# is more robust than scraping the (line-wrapped) init output with grep/awk
WORKER_JOIN=$(ssh root@${NODE_IPS[0]} "kubeadm token create --print-join-command")
for (( i=3; i<NODE_COUNT; i++ )); do
  ssh root@${NODE_IPS[i]} "$WORKER_JOIN"
done
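If you do scrape the token and CA-cert hash out of the saved kubeadm output, the extraction can be verified offline against a sample transcript. A sketch (helper names and sample values below are made up for illustration):

```shell
#!/bin/bash
# Hypothetical parsers for the two fields a worker join command needs.
join_token()   { grep -oE -- '--token [^ ]+' | awk '{print $2}' | head -n 1; }
ca_cert_hash() { grep -oE 'sha256:[0-9a-f]+' | head -n 1; }

# Sample transcript in the shape `kubeadm init` prints it, wrapped
# across two lines as it is in the real output (values made up).
sample='kubeadm join 192.168.122.41:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1234567890abcdef'

join_token   <<< "$sample"   # abcdef.0123456789abcdef
ca_cert_hash <<< "$sample"   # sha256:1234567890abcdef
```

Note that the line wrap is exactly why naive field counting on a single grep match can break: the token and the hash land on different lines of the transcript.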

# Install Calico network plugin
kubectl --kubeconfig=$KUBECONFIG apply -f https://docs.projectcalico.org/v3.21/manifests/calico.yaml

# Install Kubernetes dashboard
kubectl --kubeconfig=$KUBECONFIG apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml

# Create a service account for the dashboard
kubectl --kubeconfig=$KUBECONFIG create serviceaccount dashboard-admin-sa

# Bind the dashboard-admin-sa account to the cluster-admin role
kubectl --kubeconfig=$KUBECONFIG create clusterrolebinding dashboard-admin-sa \
--clusterrole=cluster-admin \
--serviceaccount=default:dashboard-admin-sa

# Get the token for the dashboard-admin-sa service account
TOKEN=$(kubectl --kubeconfig=$KUBECONFIG describe secret $(kubectl --kubeconfig=$KUBECONFIG get secret | grep dashboard-admin-sa | awk '{print $1}') | grep "token:" | awk '{print $2}')

# Create a kubeconfig file for the dashboard
cat > /root/dashboard-kubeconfig.yaml <<EOF
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: $(kubectl --kubeconfig=$KUBECONFIG config view --raw --flatten -o json | jq -r '.clusters[0].cluster."certificate-authority-data"')
    server: $(kubectl --kubeconfig=$KUBECONFIG config view --raw --flatten -o json | jq -r '.clusters[0].cluster.server')
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: dashboard-admin-sa
  name: dashboard
current-context: dashboard
users:
- name: dashboard-admin-sa
  user:
    token: $TOKEN
EOF

# Expose the Kubernetes dashboard as a NodePort service
# The recommended.yaml manifest deploys the dashboard into the
# kubernetes-dashboard namespace, so the NodePort service (and its
# selector) must live there too, not in kube-system
kubectl --kubeconfig=$KUBECONFIG apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-nodeport
  namespace: kubernetes-dashboard
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
EOF

# Get the NodePort assigned to the Kubernetes dashboard
NODE_PORT=$(kubectl --kubeconfig=$KUBECONFIG get svc kubernetes-dashboard-nodeport -n kubernetes-dashboard -o=jsonpath='{.spec.ports[0].nodePort}')

# Print the URL for the Kubernetes dashboard
# A NodePort is served on the cluster nodes, not on the hypervisor host
echo "Kubernetes dashboard URL: https://${NODE_IPS[0]}:$NODE_PORT"

Note: Please ensure that you have installed all the necessary packages and have set the variables before running the script. Also, you may need to modify the script based on your specific environment and requirements.
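When experimenting with the script it helps to be able to tear the whole cluster down and start over. A minimal sketch of a teardown step, assuming the VM names match the NODES array above; the helper (`teardown_cmds`, an illustrative name) only prints the virsh commands so you can review them before running anything:

```shell
#!/bin/bash
# Hypothetical teardown helper: print the virsh commands that would
# stop and undefine each node VM along with its disk.
teardown_cmds() {
  local node
  for node in "$@"; do
    printf 'virsh destroy %s; virsh undefine %s --remove-all-storage\n' "$node" "$node"
  done
}

teardown_cmds master1 worker1
# After reviewing the output, run for real with:
#   teardown_cmds "${NODES[@]}" | sh
```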

Published in Tutorials
