Install containerd on Ubuntu 20.10 for k8s installation

Ratul Buragohain
5 min read · Feb 13, 2023


This post is the first part of a series: we start by installing containerd as the container runtime on Ubuntu 20.10, then set up a k8s cluster, and later explore the Istio service mesh from absolute basics.

This is a one-stop shop: all the necessary steps, compiled from the relevant sources, are in one post, with links to the source documentation for easy reference.

We start with configuring the prerequisites for containerd.

1. Configure prerequisites

Load the OverlayFS kernel module and enable the Linux kernel's bridge-nf-call-iptables functionality. Refer to the k8s container runtime prerequisites documentation for reference.

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system

The “sudo modprobe overlay” command loads the OverlayFS kernel module, which allows the creation of a virtual filesystem that layers a writable filesystem on top of a read-only one. This is useful for creating a read-write layer on top of a read-only filesystem, such as when building a container's root filesystem from read-only image layers.
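To make this concrete, here is a minimal sketch of an overlay mount; the /tmp/ovl paths are hypothetical, chosen only for illustration. Writes to the merged view land in the upper layer while the lower layer stays read-only.

# hypothetical demo paths; requires the overlay module loaded above
mkdir -p /tmp/ovl/{lower,upper,work,merged}
echo "from the read-only layer" > /tmp/ovl/lower/file.txt
sudo mount -t overlay overlay \
  -o lowerdir=/tmp/ovl/lower,upperdir=/tmp/ovl/upper,workdir=/tmp/ovl/work \
  /tmp/ovl/merged
echo "new file" | sudo tee /tmp/ovl/merged/new.txt
ls /tmp/ovl/upper    # new.txt appears here; lower/ is untouched
sudo umount /tmp/ovl/merged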

The bridge-nf-call-iptables functionality in the Linux kernel is a feature that allows the kernel to call iptables when a packet is received or sent over a bridge. This allows for the implementation of firewall rules on packets that are crossing the bridge, allowing for more secure networking.

The “sudo modprobe br_netfilter” command loads the br_netfilter module, which makes the kernel's bridge-nf-call-iptables functionality and the matching sysctl settings available.

Verify that the br_netfilter and overlay modules are loaded by running the instructions below:

lsmod | grep br_netfilter
lsmod | grep overlay

lsmod lists the contents of /proc/modules, showing which kernel modules are currently loaded.
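The output should look roughly like the following; module sizes and use counts vary between machines:

br_netfilter           28672  0
bridge                176128  1 br_netfilter
overlay               118784  0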

Verify that the net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables, and net.ipv4.ip_forward system variables are set to 1 in the sysctl config by running the instruction below:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

sysctl is used to modify kernel parameters at runtime. The parameters available are those listed under /proc/sys/.

We can alternatively check these variables as shown below:

cat /proc/sys/net/ipv4/ip_forward

The values should be set to 1.

2. Install containerd

This section borrows content from the getting started with containerd page in the containerd GitHub repository. Refer to that document for more information.

We also need runc and the CNI plugins to be installed.

First, we install containerd. The latest released version as of this post is containerd 1.6.17. Download the release to your local machine and extract it under /usr/local:

wget https://github.com/containerd/containerd/releases/download/v1.6.17/containerd-1.6.17-linux-amd64.tar.gz
sudo tar Cxzvf /usr/local containerd-1.6.17-linux-amd64.tar.gz

Since we are using systemd as our init system, we also need the containerd.service unit file. Download it from the containerd repository and place it at /usr/local/lib/systemd/system/containerd.service, as shown below. The /usr/local/lib/systemd/system/ directory might not be present, so create it if necessary.


sudo mkdir -p /usr/local/lib/systemd/system/
# fetch the unit file from the containerd repository (raw GitHub path)
sudo wget -O /usr/local/lib/systemd/system/containerd.service \
  https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
sudo systemctl daemon-reload
sudo systemctl enable --now containerd
sudo systemctl status containerd.service

Generate the containerd config file and configure containerd to use systemd as the cgroup driver.

The default configuration can be generated via:

containerd config default > config.toml

Note: The /etc/containerd/ directory might not be present. In that case, it has to be created:

sudo mkdir /etc/containerd

To use the systemd cgroup driver in /etc/containerd/config.toml with runc, set:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true

sudo mv config.toml /etc/containerd/config.toml
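Editing the TOML by hand works, but the same change can be applied in one pass; the sketch below simply flips the SystemdCgroup flag in the generated default config. Either way, restart containerd afterwards so it picks up the new config.

# flip the cgroup driver flag in the generated default config
containerd config default \
  | sed 's/SystemdCgroup = false/SystemdCgroup = true/' \
  | sudo tee /etc/containerd/config.toml > /dev/null

# restart containerd to apply the change
sudo systemctl restart containerd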

Install runc:

Containerd requires runc because runc is the low-level container runtime that actually runs applications inside containers. runc is responsible for creating, starting, and managing the lifecycle of containers: it provides the core functionality needed to run them, such as setting up namespaces, cgroups, and other isolation mechanisms.

Containerd provides a higher-level abstraction layer on top of runc that simplifies the process of managing containers. It provides a consistent API for managing container images, creating and running containers, and monitoring their health. Containerd also includes additional features such as snapshotting and versioning of container images as well as integration with orchestration systems like Kubernetes.
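Once runc is installed (below), you can see its low-level interface directly by generating a default OCI bundle spec. A minimal sketch, using a hypothetical /tmp/bundle path; you would still need to populate rootfs/ with an actual root filesystem before running anything:

# create an OCI bundle skeleton (hypothetical path)
mkdir -p /tmp/bundle/rootfs
cd /tmp/bundle
runc spec                 # writes a default config.json for the bundle
# after populating rootfs/ with a root filesystem:
# sudo runc run testctr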

The latest version of runc as of this post is 1.1.4. Download and install it:

wget https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64
sudo install -m 755 runc.amd64 /usr/local/sbin/runc
which runc

It should print: /usr/local/sbin/runc
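As an additional sanity check, runc can report its own version; the exact output format varies by build:

runc --version
# prints something like:
#   runc version 1.1.4
#   spec: 1.0.2-dev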

Install CNI plugins:

CNI plugins are needed alongside containerd because they provide the networking layer for the k8s cluster.

The different CNI plugins provide different functionalities. For example:

- Bridge: Layer 2 bridging between containers and the host
- Macvlan: Layer 2 segmentation
- VLAN: virtual LAN segmentation
- Portmap: port mapping between containers and the host
- Host-local: local IP address assignment
- Static: static IP address assignment
- VRF: virtual routing and forwarding
- Tuning: network performance tuning
- Firewall: firewall rules for traffic control
- Host-device: access to host devices from containers
- Loopback: loopback traffic

CNI implementations such as Weave, Calico, or Flannel provide a way to configure and manage the network between nodes in a Kubernetes cluster. They provide features such as overlay networks that securely connect pods across nodes, policy-based routing, service discovery, load balancing, network security policies, and more. Please refer to the respective providers' documentation for more information.
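For illustration, once the plugins below are installed, a minimal CNI network config chaining the bridge, host-local (IPAM), and portmap plugins could look like this sketch. The file name, network name, and subnet are hypothetical; in a real k8s cluster the CNI provider drops its own config under /etc/cni/net.d/, so treat this purely as an example of the format:

# illustrative config file; name, network, and subnet are made up
cat <<'EOF' | sudo tee /etc/cni/net.d/10-mynet.conflist
{
  "cniVersion": "1.0.0",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
EOF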

Get the latest release of the CNI plugins (v1.2.0 as of this post) and install them; the tar verbose flag lists the extracted plugin binaries:

wget https://github.com/containernetworking/plugins/releases/download/v1.2.0/cni-plugins-linux-amd64-v1.2.0.tgz
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.2.0.tgz
./
./loopback
./bandwidth
./ptp
./vlan
./host-device
./tuning
./vrf
./sbr
./dhcp
./static
./firewall
./macvlan
./dummy
./bridge
./ipvlan
./portmap
./host-local

At this point containerd has been installed. We can download a container image and run it using either the ctr tool (bundled with containerd) or nerdctl, as shown below:

sudo ctr images pull docker.io/library/redis:alpine
sudo ctr run docker.io/library/redis:alpine redis
13 Feb 2023 17:41:58.085 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo

Kill the redis container by pressing Ctrl+C and delete it:

sudo ctr c ls
sudo ctr c rm redis
sudo ctr images rm docker.io/library/redis:alpine
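If you prefer a Docker-like CLI, the same pull/run/cleanup flow with nerdctl looks roughly like this sketch; note that nerdctl is not bundled in the containerd release tarball and must be installed separately:

# equivalent flow with nerdctl (installed separately)
sudo nerdctl run -d --name redis docker.io/library/redis:alpine
sudo nerdctl ps
sudo nerdctl rm -f redis
sudo nerdctl rmi docker.io/library/redis:alpine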

This concludes the setup of containerd. Follow these steps to install containerd on all the cluster instances.

In the next post, we will install a k8s cluster and then go on to explore Istio step by step. We will not start with the typical Bookinfo sample that comes bundled with Istio; the Bookinfo sample is informative, but we will start from absolute basics. Stay tuned!
