# Kubernetes Installation Guide

Comprehensive guide for installing Kubernetes in different environments, from development to production, using kubeadm, Minikube, kind, or a managed service.
## Prerequisites

### System Requirements

- CPU: 2+ cores
- RAM: 2 GB+ (4 GB+ recommended)
- Disk: 20 GB+ free space
- Network: full connectivity between all nodes
- Operating system: Linux (Ubuntu 20.04+, CentOS 7+, RHEL 7+)

### Required Tools

- Docker or containerd runtime
- curl or wget
- sudo access
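The requirements above can be checked with a short preflight script before you start. This is a sketch, not an official tool: the `check_min` helper and the thresholds are ours, mirroring the minimums listed above.

```shell
#!/usr/bin/env bash
# Preflight sketch: compare this machine against the guide's minimums.

check_min() {  # usage: check_min <label> <actual> <minimum>
  if [ "$2" -ge "$3" ]; then
    echo "OK:   $1 ($2 >= $3)"
  else
    echo "FAIL: $1 ($2 < $3)"
  fi
}

check_min "CPU cores" "$(nproc)" 2
check_min "RAM (MB)"  "$(free -m | awk '/^Mem:/ {print $2}')" 2048
check_min "Free disk on / (GB)" "$(df -BG --output=avail / | awk 'NR==2 {gsub(/G/, ""); print $1}')" 20

# The guide also needs curl or wget:
command -v curl >/dev/null || command -v wget >/dev/null || echo "FAIL: install curl or wget"
```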
## Installation Methods Overview

- **kubeadm**: production-ready clusters with full control
- **Minikube**: local development environment
- **kind**: testing and CI/CD environments
- **Managed services**: cloud-managed Kubernetes (EKS, GKE, AKS)

| Method | Best For | Complexity | Time |
|---|---|---|---|
| kubeadm | Production clusters | Medium | 30-60 min |
| Minikube | Local development | Low | 10-15 min |
| kind | Testing/CI | Low | 5-10 min |
| Managed | Production (cloud) | Low | 15-30 min |
## Method 1: kubeadm (Production-Ready)

### Step 1: Prepare All Nodes

Run these commands on all nodes (master and workers):

```bash
# Update system
sudo apt update && sudo apt upgrade -y

# Install required packages
sudo apt install -y apt-transport-https ca-certificates curl gpg

# Disable swap (required by Kubernetes)
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# Load required kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Set required sysctl parameters
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

# Apply the sysctl settings without rebooting
sudo sysctl --system
```

### Step 2: Install Container Runtime (containerd)
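Before installing the runtime, it can be worth confirming that the settings from Step 1 actually took effect. A small sketch; the `expect_sysctl` helper is ours, not part of any Kubernetes tooling:

```shell
# Verify one sysctl key at a time against its expected value.
expect_sysctl() {  # usage: expect_sysctl <key> <expected-value>
  actual=$(sysctl -n "$1" 2>/dev/null)
  if [ "$actual" = "$2" ]; then
    echo "OK: $1 = $actual"
  else
    echo "FAIL: $1 = ${actual:-unset} (expected $2)"
  fi
}

expect_sysctl net.ipv4.ip_forward 1
expect_sysctl net.bridge.bridge-nf-call-iptables 1

# The kernel modules can be checked the same way:
lsmod | grep -Eq '^br_netfilter' && echo "OK: br_netfilter loaded" || echo "FAIL: br_netfilter not loaded"
```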
```bash
# Install containerd
sudo apt install -y containerd

# Generate the default configuration
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

# Enable the systemd cgroup driver (required when the OS uses systemd)
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

# Restart containerd and enable it at boot
sudo systemctl restart containerd
sudo systemctl enable containerd
```

### Step 3: Install Kubernetes Components
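Before adding the Kubernetes packages, you can double-check that the `sed` edit above really flipped the cgroup flag. A sketch; the `check_systemd_cgroup` helper is our own naming:

```shell
# Report whether SystemdCgroup is enabled in a containerd config read from stdin.
check_systemd_cgroup() {
  if grep -q 'SystemdCgroup = true' -; then
    echo "SystemdCgroup enabled"
  else
    echo "SystemdCgroup NOT enabled"
  fi
}

# On a real node:
# check_systemd_cgroup < /etc/containerd/config.toml
```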
```bash
# Add the Kubernetes APT repository (v1.28 shown; substitute the minor release you need)
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Update package index
sudo apt update

# Install kubelet, kubeadm, and kubectl
sudo apt install -y kubelet kubeadm kubectl

# Hold the packages so routine upgrades don't move the cluster version
sudo apt-mark hold kubelet kubeadm kubectl

# Enable kubelet
sudo systemctl enable kubelet
```

### Step 4: Initialize Master Node
Run only on the master node:

```bash
# Initialize the cluster (this CIDR matches Flannel's default)
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Set up kubectl for your regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

### Step 5: Install Pod Network (CNI)
```bash
# Install Flannel CNI
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# Alternative: install Calico instead
# kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
# kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml
```

### Step 6: Join Worker Nodes
On each worker node, run the join command printed by `kubeadm init`:

```bash
# Example (replace with your actual token and hash)
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```

If you lost the join command, generate a new one on the master:

```bash
kubeadm token create --print-join-command
```

## Method 2: Minikube (Development)
### Installation

```bash
# Install Minikube
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube /usr/local/bin/

# Install kubectl (if not already installed)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Start Minikube
minikube start --driver=docker

# Set kubectl context
kubectl config use-context minikube
```

### Useful Minikube Commands
```bash
# Check status
minikube status

# Stop cluster
minikube stop

# Delete cluster
minikube delete

# Access Kubernetes dashboard
minikube dashboard

# SSH into the node
minikube ssh
```

## Method 3: kind (Testing)
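kind runs every cluster node as a container, so the only hard prerequisite is a working Docker (or compatible) daemon. A quick pre-check sketch; the `runtime_hint` helper is our own naming:

```shell
# Report whether a given CLI is on PATH before trying to create a cluster.
runtime_hint() {  # usage: runtime_hint <command-name>
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1 found"
  else
    echo "$1 not found"
  fi
}

runtime_hint docker
```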
### Installation

```bash
# Install kind
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

# Create cluster
kind create cluster --name my-cluster

# Set kubectl context
kubectl cluster-info --context kind-my-cluster
```

### Multi-node kind Cluster
Create a config file `kind-config.yaml`:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
```

```bash
# Create the multi-node cluster
kind create cluster --name multi-node --config kind-config.yaml
```

## Method 4: Managed Services
### AWS EKS

```bash
# Install eksctl
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin

# Create cluster
eksctl create cluster --name my-cluster --region us-west-2 --nodegroup-name worker-nodes --node-type t3.medium --nodes 2
```

### Google GKE

```bash
# Create cluster
gcloud container clusters create my-cluster --zone us-central1-a --num-nodes 2

# Get credentials
gcloud container clusters get-credentials my-cluster --zone us-central1-a
```

### Azure AKS

```bash
# Create resource group
az group create --name myResourceGroup --location eastus

# Create cluster
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 2 --enable-addons monitoring --generate-ssh-keys

# Get credentials
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
```

## Post-Installation Setup
### Install Helm (Package Manager)

```bash
# Install Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Add repositories (the old "stable" repo is archived and kept only for legacy charts)
helm repo add stable https://charts.helm.sh/stable
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
```

### Install Metrics Server
```bash
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# Note: on kubeadm clusters you may need to add --kubelet-insecure-tls
# to the metrics-server container args before it becomes Ready.
```

### Install Ingress Controller

```bash
# NGINX Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml

# Or using Helm
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx
```

## Verification
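A quick scripted check is to count how many nodes report `Ready`. The helper below is a sketch that assumes the default `kubectl get nodes --no-headers` column order (name, status, roles, age, version); pipe real output into it on your cluster.

```shell
# Count nodes whose STATUS column reads exactly "Ready".
count_ready() {
  awk '$2 == "Ready" {n++} END {print n+0}'
}

# On a real cluster:
# kubectl get nodes --no-headers | count_ready
```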
### Check Cluster Status

```bash
# Check nodes
kubectl get nodes -o wide

# Check system pods
kubectl get pods -n kube-system

# Check cluster info
kubectl cluster-info

# Check component status (deprecated since v1.19; the checks above are preferred)
kubectl get componentstatuses
```

### Deploy Test Application
```bash
# Create test deployment
kubectl create deployment nginx --image=nginx

# Expose deployment
kubectl expose deployment nginx --port=80 --type=NodePort

# Check deployment
kubectl get deployments
kubectl get pods
kubectl get services

# Find the assigned NodePort
kubectl get service nginx

# Access via http://<node-ip>:<node-port>
```

## Troubleshooting
### Common Issues

#### Pods stuck in Pending state

```bash
kubectl describe pod <pod-name>
# Check for resource constraints or node issues
```

#### Node not joining cluster
```bash
# Check firewall rules (ports 6443, 2379-2380, 10250-10252)
sudo ufw allow 6443
# Check token validity
kubeadm token list
```

#### CNI Network Issues
```bash
# Check CNI pods
kubectl get pods -n kube-system | grep -E "(flannel|calico|weave)"
# Restart CNI
kubectl delete pods -n kube-system -l app=flannel
```

#### kubelet Not Starting
```bash
sudo systemctl status kubelet
sudo journalctl -u kubelet -f
# Common fix: ensure swap is disabled
sudo swapoff -a
```

### Useful Commands
```bash
# View logs
kubectl logs <pod-name>
kubectl logs -f deployment/<deployment-name>

# Describe resources
kubectl describe pod <pod-name>
kubectl describe node <node-name>

# Execute commands in pods
kubectl exec -it <pod-name> -- bash

# Port forwarding
kubectl port-forward <pod-name> 8080:80

# Get events
kubectl get events --sort-by=.metadata.creationTimestamp
```

## Next Steps
After successful installation:

- **Security**: configure RBAC, network policies, and pod security standards
- **Monitoring**: install Prometheus and Grafana for cluster monitoring
- **AWX deployment**: deploy the AWX automation platform on your cluster