How to deploy Kubernetes with Kubeadm and containerd

There’s no easy way to say this, but Kubernetes is a challenge. And while at one time it was actually quite simple to deploy a Kubernetes cluster on bare metal (thanks, in part, to Docker), it’s not as simple as it used to be. To complicate matters further, there is an almost endless number of paths to get the platform up and running.

Which do you choose? The answer to this question depends on what you’re doing with Kubernetes, the platform you plan to deploy it on, and your operating system of choice.

One method of deploying a Kubernetes cluster is with kubeadm (a tool that speeds up the deployment) and containerd (a container runtime). This is the method I want to illustrate here. Go through this process and you will end up with a working Kubernetes cluster. I’ll limit it to one master and one node (for simplicity), but you can deploy as many nodes as you want.

To do this, you will need at least two machines (one for the master and one for the node). I’m going to demo it on my server operating system of choice, Ubuntu 20.04. Each machine must have at least 2 GB of RAM, and the master must have at least two processors.

Set host names and host files

The first thing we are going to do is set the hostnames for each machine. First, log in to your master and run the command:

sudo hostnamectl set-hostname kubemaster

Then edit the hosts file with the command:

sudo nano /etc/hosts

In this file, add the following two lines at the bottom (one entry for the master and one for the node):

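IP_ADDRESS kubemaster
IP_ADDRESS kubenode1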
Where IP_ADDRESS is the IP address of each machine. Save and close the file.

Connect to the node machine and set the hostname with:

sudo hostnamectl set-hostname kubenode1

Edit the /etc/hosts file the same way you did on the master (using the same settings).

Install the necessary software

On both machines, you will need to install some software. First, connect to the master and run an update/upgrade with the commands:

sudo apt update

sudo apt upgrade -y

If the kernel is upgraded, be sure to reboot the machine.
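You can do that with:

sudo reboot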

Once the upgrade is complete, install the first dependencies with:

sudo apt install curl apt-transport-https -y

Then add the necessary GPG key with the command:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

Add the Kubernetes repository with:

echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Update apt:

sudo apt update

Install the required software with the command:

sudo apt -y install vim git curl wget kubelet kubeadm kubectl

Finally, place kubelet, kubeadm, and kubectl on hold (so apt will not automatically upgrade them) with:

sudo apt-mark hold kubelet kubeadm kubectl

Start and enable the kubelet service with:

sudo systemctl enable --now kubelet

Repeat this process on kubenode1.

Disable swap

Next, we need to disable swap on both kubemaster and kubenode1. Open the fstab file for editing with:

sudo nano /etc/fstab

In this file, comment out the swap line. On a default Ubuntu 20.04 installation, it typically looks like this:

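/swap.img       none    swap    sw      0       0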
This line should now look like:

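#/swap.img       none    swap    sw      0       0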
Save and close the file. You can either reboot to disable swap, or just run the following command to complete the job:

sudo swapoff -a

Enable kernel modules and change settings in sysctl

Next, we need to enable two kernel modules and add some parameters to sysctl. First, enable the overlay and br_netfilter modules with:

sudo modprobe overlay

sudo modprobe br_netfilter

Change the sysctl parameters by opening the necessary file with the command:

sudo nano /etc/sysctl.d/kubernetes.conf

Make sure the following lines are in the file and defined exactly as you see below (add them if they are missing):

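net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1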
Save and close the file. Reload sysctl with:

sudo sysctl --system

Make sure to complete all of the above steps on both kubemaster and kubenode1.

Install containerd

We will now install the containerd runtime. This is done on both machines. The first thing to do is configure the persistent loading of the kernel modules containerd needs, along with the related sysctl settings. This is done with the following commands (the standard container-runtime prerequisites from the Kubernetes documentation), which you can copy and paste as is:

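# Load the overlay and br_netfilter modules at boot
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

# Networking settings required by the container runtime
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF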
Again, we reload the configuration with:

sudo sysctl --system

Install the necessary dependencies with:

sudo apt install curl gnupg2 software-properties-common apt-transport-https ca-certificates -y

Add the GPG key with:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Add the required repository with:

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Install containerd with the commands:

sudo apt update

sudo apt install containerd.io -y

Switch to the root user with:

sudo su -

Create a new directory for containerd with:

mkdir -p /etc/containerd

Generate the configuration file with:

containerd config default > /etc/containerd/config.toml

Exit the root user with:

exit

Restart containerd with the command:

sudo systemctl restart containerd

Enable containerd to launch at startup with:

sudo systemctl enable containerd

Finally, you need to create a new directory to house the kubectl configuration file and give it the appropriate permissions. Note that /etc/kubernetes/admin.conf is only generated once the master node has been initialized (next section), so run the copy and chown commands below after kubeadm init completes. The commands are:

mkdir -p $HOME/.kube

sudo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

Initialize the master node

Go to kubemaster and pull down the necessary container images with:

sudo kubeadm config images pull

Now, using the kubemaster IP address, initialize the master node with:

sudo kubeadm init --pod-network-cidr=IP/16

Where IP is the IP address of kubemaster. When the initialization completes, you will be presented with a join command that looks something like this:

sudo kubeadm join 192.168.1.100:6443 --token 0dt0kt.h4i71m34tbfqup83 --discovery-token-ca-cert-hash sha256:c74be4fd295c172ba0fd6bdae870a834b051327c45fa46cc9d738e74f5de82a0

The above command is what you run next on kubenode1 to join it to the cluster. The join should complete very quickly. When done, return to kubemaster and run the command:

kubectl get nodes

You should see your master and node listed. Congratulations, you have successfully deployed a Kubernetes cluster and can use it for development purposes.

I wouldn’t suggest using it for production as it’s too small to scale and we haven’t considered security (like using SSL certificates). But it’s a great way to practice cluster deployment and a viable introduction to Kubernetes development.

The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.



Steven L. Nielsen