Install k8s on Ubuntu 20.10 with containerd
This post is the second in a series where we install k8s with containerd as the runtime. Refer to the first part here, where we installed containerd in preparation for installing k8s. This post borrows from the official documentation and targets k8s version v1.26.1, the latest release as of this writing.
Open required ports:
A Kubernetes cluster typically requires the following ports to be open:
- TCP port 6443 for the Kubernetes API server.
- TCP ports 2379–2380 for etcd server client communication.
- TCP port 10250 for the Kubelet API.
- TCP port 10259 for kube-scheduler.
- TCP port 10257 for kube-controller-manager.
- TCP ports 30000–32767 for NodePort services on worker nodes.
Make sure these ports are open. In addition to these ports, CNI providers like Flannel, Calico, etc. require specific ports such as 8285 and 8472 to be open. Consult the vendor documentation for additional information.
On a standalone Ubuntu box we can use the following commands to open a single port or a port range:
sudo ufw allow 6443
sudo ufw allow 2379:2380/tcp
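To open everything from the list above in one go, a small shell loop works (a minimal sketch; trim it to the ports your setup actually needs), and ufw status lets you verify the rules afterwards:
for p in 6443 2379:2380 10250 10257 10259 30000:32767; do sudo ufw allow ${p}/tcp; done
sudo ufw status numbered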
Check for processes occupying ports:
sudo apt install net-tools -y
sudo netstat -tupln | grep 6443
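Alternatively, if you would rather not install net-tools (which is deprecated in favor of iproute2), the preinstalled ss utility reports the same information:
sudo ss -tulpn | grep 6443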
We want to use the current node as the control plane node, so we install kubeadm, kubectl and kubelet on it. Worker nodes don’t need kubectl.
Execute the following commands:
Update the apt package index and install the packages needed to use the Kubernetes apt repository:
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
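Note: on Ubuntu releases older than 22.04 the /etc/apt/keyrings directory does not exist by default, so create it before downloading the key:
sudo mkdir -p /etc/apt/keyrings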
Download the Google Cloud public signing key:
sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
Add the Kubernetes apt repository:
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
Update the apt package index, install kubelet, kubeadm and kubectl, and pin their versions:
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
Note: We are using apt-mark hold here to prevent the kubelet, kubeadm and kubectl binaries from being automatically upgraded, removed or reinstalled.
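You can verify that the hold is in place with apt-mark showhold, which should list all three packages:
apt-mark showhold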
At this point we are ready to initialize our k8s cluster.
Initialize the k8s master node:
sudo kubeadm init
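By default, kubeadm initializes the cluster at the version of the installed binaries. If you want to pin the control plane to a specific release (for example the v1.26.1 this post targets), kubeadm accepts a --kubernetes-version flag:
sudo kubeadm init --kubernetes-version v1.26.1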
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Enable kubectl bash autocompletion (optional):
echo 'source <(kubectl completion bash)' >>~/.bashrc
source ~/.bashrc
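The kubectl documentation also suggests aliasing kubectl to k and extending completion to the alias:
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -o default -F __start_kubectl k' >>~/.bashrc
source ~/.bashrc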
At this point, running "kubectl get nodes" shows the following status:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
server3 NotReady control-plane 3m12s v1.26.1
Similarly, querying the k8s system pods shows the following status:
kubectl -n kube-system get pod
NAME READY STATUS RESTARTS AGE
coredns-787d4945fb-2ncf8 0/1 Pending 0 3m30s
coredns-787d4945fb-db8s9 0/1 Pending 0 3m30s
etcd-server3 1/1 Running 0 3m34s
.....
The NotReady and Pending statuses are due to the fact that we have not installed a CNI plugin yet. Let’s do that next.
Install CNI plugin:
We have various options for CNI plugins, like Weave, Flannel, Calico, Cilium, etc. They provide pod networking so containers can communicate with each other without needing to know each other’s IP addresses, along with service discovery and other functionality such as k8s network policies. Consult the vendor documentation for their respective feature sets.
We will be using the Cilium CNI plugin for our k8s cluster.
Install the Cilium CLI:
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/master/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
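As a quick sanity check that the CLI landed on your PATH, print its version (it will also report the running Cilium version once the cluster has one):
cilium version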
Install Cilium:
cilium install
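The Cilium CLI can also block until the agent and operator report ready, which is handy when scripting:
cilium status --wait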
Now if we query our cluster’s node and pods, they should be in the Ready and Running states:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
server3 Ready control-plane 97m v1.26.1
kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
cilium-5nhl5 1/1 Running 0 110s
cilium-operator-f59cbd8c6-9fkp6 1/1 Running 0 110s
coredns-787d4945fb-2ncf8 1/1 Running 0 98m
coredns-787d4945fb-db8s9 1/1 Running 0 98m
Our master node is ready now. Since it is currently the only node in the cluster, let’s remove the taint that prevents pods from being scheduled to the control plane node.
kubectl describe nodes server3 | grep -i taint
Taints: node-role.kubernetes.io/control-plane:NoSchedule
kubectl taint node server3 node-role.kubernetes.io/control-plane:NoSchedule-
node/server3 untainted
Let’s deploy an nginx pod:
kubectl run --image nginx webserver
pod/webserver created
kubectl get pod webserver
NAME READY STATUS RESTARTS AGE
webserver 1/1 Running 0 13s
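As a quick smoke test (the pod and service names here are just our choices), we can expose the pod as a ClusterIP service and fetch the nginx welcome page from a throwaway busybox pod:
kubectl expose pod webserver --port 80
kubectl run --rm -it --image busybox tester -- wget -qO- http://webserver
kubectl delete service webserver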
This completes our k8s setup on the master node. Next, we will add worker nodes to our cluster, and then install Istio. Stay tuned!