Multi-Node Kubernetes Setup
This guide provides instructions for setting up a multi-node Kubernetes cluster using either microk8s or k3s. The guide assumes a minimum three-node cluster configuration for high availability.
Prerequisites
Hardware Requirements
Per node:
4 CPU cores
16 GB RAM
50 GB disk space
Static private IP
Network Requirements
TCP/UDP communication between nodes enabled
All nodes can reach each other
Firewall/Security Group rules configured for inter-node communication
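If you manage host firewalls with ufw, the following is a sketch of the rules needed for inter-node traffic. The ports are the documented defaults for each distribution; open only the set for the distribution you choose, and adjust the subnet (an assumption here) and ports if your network or CNI differs.

```shell
# Assumption: nodes share this private subnet; adjust to your network
NODE_SUBNET=192.168.1.0/24

# microk8s ports
sudo ufw allow from $NODE_SUBNET to any port 16443 proto tcp   # API server
sudo ufw allow from $NODE_SUBNET to any port 25000 proto tcp   # cluster agent (join)
sudo ufw allow from $NODE_SUBNET to any port 19001 proto tcp   # dqlite datastore
sudo ufw allow from $NODE_SUBNET to any port 10250 proto tcp   # kubelet

# k3s ports
sudo ufw allow from $NODE_SUBNET to any port 6443 proto tcp        # API server
sudo ufw allow from $NODE_SUBNET to any port 2379:2380 proto tcp   # embedded etcd
sudo ufw allow from $NODE_SUBNET to any port 10250 proto tcp       # kubelet
sudo ufw allow from $NODE_SUBNET to any port 8472 proto udp        # flannel VXLAN
```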
Example Setup
Assume three nodes:
Node1: Primary control plane
Node2: Secondary control plane
Node3: Secondary control plane
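Optionally, you can map hostnames to the static IPs on every node so the examples are easier to follow. The addresses below match the example IPs used in the join commands later in this guide; substitute your own.

```shell
# Optional: hostname-to-IP mapping on each node
# (IPs match the example addresses used throughout this guide)
sudo tee -a /etc/hosts << 'EOF'
192.168.1.100 node1
192.168.1.101 node2
192.168.1.102 node3
EOF
```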
Installation Options
Choose one of the following installation methods:
microk8s Installation
1. Install microk8s on All Nodes
# On each node
sudo snap install microk8s --classic
sudo usermod -a -G microk8s $USER
sudo chown -R $USER ~/.kube
newgrp microk8s
2. Create Cluster
On Node1 (Primary)
# Get join command
microk8s add-node
Sample output:
From the node you wish to join to this cluster, run the following:
microk8s join 192.168.1.100:25000/abcdef0123456789/bcdef01234567890
On Node2
# Join the cluster
microk8s join 192.168.1.100:25000/abcdef0123456789/bcdef01234567890
On Node3
# Get another join token from Node1
microk8s add-node # Run this on Node1
# Join the cluster
microk8s join 192.168.1.100:25000/xyz123456789/yzx123456789 # Run this on Node3
3. Verify Cluster
# On any node
microk8s status --wait-ready
Expected output showing HA status:
high-availability: yes
datastore master nodes: 192.168.1.100:19001 192.168.1.101:19001 192.168.1.102:19001
k3s Installation
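k3s joins all server nodes with a shared secret (the YOUR_TOKEN placeholder in the commands below). Any hard-to-guess string works; one way to generate one, assuming openssl is installed:

```shell
# Generate a random shared token for the cluster.
# Use the same value on every server node.
K3S_TOKEN=$(openssl rand -hex 16)
echo "$K3S_TOKEN"
```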
1. Set Up First Server Node (Node1)
# Install k3s server
curl -sfL https://get.k3s.io | sh -s - server \
--cluster-init \
--token=YOUR_TOKEN
2. Join Additional Server Nodes
# On Node2 and Node3
curl -sfL https://get.k3s.io | sh -s - server \
--server https://NODE1_IP:6443 \
--token=YOUR_TOKEN
3. Verify Cluster
# Check cluster status
kubectl get nodes -o wide
# Expected output showing all nodes with control-plane roles
NAME STATUS ROLES AGE VERSION
node1 Ready control-plane,etcd,master 10m v1.28.5+k3s1
node2 Ready control-plane,etcd,master 8m v1.28.5+k3s1
node3 Ready control-plane,etcd,master 8m v1.28.5+k3s1
Deploy Observo Site
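Before deploying, it can help to block until every node reports Ready rather than polling `kubectl get nodes` by hand; `kubectl wait` supports this directly:

```shell
# Block until all nodes report Ready (fails after 5 minutes)
kubectl wait --for=condition=Ready node --all --timeout=300s
```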
1. Configure Access (On Node1)
# Create the kubeconfig directory if needed
mkdir -p ~/.kube
# For microk8s
microk8s config > ~/.kube/config
# For k3s
sudo cat /etc/rancher/k3s/k3s.yaml > ~/.kube/config
# Note: the copied kubeconfig points at https://127.0.0.1:6443; edit the
# server address to Node1's IP if you run kubectl from another machine
2. Deploy Site
# Apply cert-manager CRDs
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.crds.yaml
# Deploy Observo Site
helm upgrade --install -n observo-client observo-site \
oci://public.ecr.aws/e4z0a1h1/observo-site \
--create-namespace \
--values=/path/to/downloaded_config.yaml
Configure Access
1. Configure nginx on Each Node
# Install nginx
sudo apt install nginx-full nginx-extras
# Get service IP
export SERVICE_IP=$(kubectl get svc -n observo-client data-plane-gateway-service -o jsonpath='{.spec.clusterIP}')
# Configure nginx stream
sudo mkdir -p /etc/nginx/conf.d/stream
sudo tee /etc/nginx/conf.d/stream/tcp-proxy.conf << EOF
stream {
map \$server_port \$destination_port {
default 10001;
~^(\d+)$ \$1;
}
server {
listen 10001-10050;
proxy_pass ${SERVICE_IP}:\$destination_port;
proxy_connect_timeout 1s;
}
}
EOF
# Update nginx config (adds the stream include once; safe to re-run)
grep -q 'conf.d/stream' /etc/nginx/nginx.conf || \
  sudo sed -i '/^http {/i include /etc/nginx/conf.d/stream/*.conf;' /etc/nginx/nginx.conf
# Restart nginx
sudo systemctl restart nginx
Verify Installation
1. Check Node Status
kubectl get nodes
kubectl get pods -n observo-client -o wide
2. Test Connectivity
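A quick way to confirm that nginx is listening across the proxied range is to probe a sample of ports using bash's built-in /dev/tcp, so no extra tools are needed. This is a sketch; adjust the port list to match your configured range.

```shell
#!/usr/bin/env bash
# Probe a sample of the proxied ports on the local node.
# "open" means the TCP connection was accepted (bash-specific /dev/tcp).
check_port() {
  if (echo > "/dev/tcp/127.0.0.1/$1") 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

for port in 10001 10025 10050; do
  echo "port $port: $(check_port "$port")"
done
```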
# Test from each node
curl -v http://localhost:10001/health
High Availability Testing
1. Test Node Failover
# Stop k3s/microk8s on one node
sudo systemctl stop k3s # For k3s
sudo systemctl stop snap.microk8s.daemon-kubelite # For microk8s
# Check cluster status
kubectl get nodes
2. Check Pod Distribution
kubectl get pods -n observo-client -o wide --sort-by='.spec.nodeName'
Troubleshooting
Network Connectivity
# Test inter-node communication
ping <node-ip>
# Test k8s API server
curl -k https://NODE1_IP:6443
Cluster Status
# Check etcd status (k3s)
kubectl get events -A | grep etcd
# Check microk8s clustering
microk8s status
Node Issues
# Check node conditions
kubectl describe node <node-name>
# Check system logs
journalctl -u k3s
journalctl -u snap.microk8s.daemon-kubelite
Maintenance
Backup Etcd (k3s)
sudo k3s etcd-snapshot save
Update Nodes
# For microk8s
sudo snap refresh microk8s --channel=1.28/stable
# For k3s
curl -sfL https://get.k3s.io | sh -
For additional assistance, refer to the official microk8s and k3s documentation.