My Kubernetes learnings from last week
As my journey into Kubernetes continues, I focused on finalizing my infrastructure and laying down the foundation of a cluster. As mentioned in prior posts, I’m building a cluster on three AWS EC2 instances.
Containerd
We finished installing containerd. As part of the process, we had to change SystemdCgroup from false to true in the /etc/containerd/config.toml file. Instead of editing the file in Vim, we used this command, which was very efficient.
`sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml`
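To double-check that the substitution took, a quick grep shows the updated line, and containerd needs a restart to pick up the change (the same restart the script below performs):
# verify the cgroup driver flag was flipped to true
grep SystemdCgroup /etc/containerd/config.toml
# restart containerd so it reloads the updated config
sudo systemctl restart containerd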
The last part of the process involved creating a script to run on the worker nodes so that we didn’t have to retype all of the commands again.
#!/bin/bash
# Install and configure prerequisites
## load the necessary modules for Containerd
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
# Install containerd
sudo apt-get update
sudo apt-get -y install containerd
# Configure containerd with defaults and restart with this config
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
sudo systemctl restart containerd
After creating the above on each node and making it executable, I ran it.
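For reference, making it executable and running it looks something like this, where install-containerd.sh is just a placeholder name for wherever the script was saved:
# save the script above as e.g. install-containerd.sh (placeholder name), then:
chmod +x install-containerd.sh
sudo ./install-containerd.sh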
Install kubeadm, kubelet, and kubectl
We installed an earlier version so that later in the course, we can experience an upgrade to the latest.
After issuing the commands to install kubeadm, kubelet, and kubectl on the control plane, as we did earlier, we created a script on the worker nodes, made it executable, and then ran it. Here’s the script.
# Install packages needed to use the Kubernetes apt repository
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
# Download the public signing key for the Kubernetes package repositories
# If the directory `/etc/apt/keyrings` does not exist, it should be created before the curl command.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# Add the Kubernetes apt repository
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
# Install kubelet, kubeadm & kubectl, and pin their versions
sudo apt-get update
# check available kubeadm versions (when manually executing)
apt-cache madison kubeadm
# Install version 1.28.0 for all components
sudo apt-get install -y kubelet=1.28.0-1.1 kubeadm=1.28.0-1.1 kubectl=1.28.0-1.1
sudo apt-mark hold kubelet kubeadm kubectl
## apt-mark hold prevents the packages from being automatically upgraded or removed
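Once the script has run, a quick manual check (not part of the script) confirms the pinned versions are in place on each node:
# confirm the pinned v1.28.0 components are installed
kubeadm version -o short
kubectl version --client
kubelet --version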
Once this is done, all of our nodes are set up with the foundational components.
Initialize cluster with kubeadm
Now that all of our nodes are configured with the foundational components, it’s time to initialize the cluster with kubeadm.
sudo kubeadm init
Here’s where we initialize the cluster. After entering that command, several things happen:
- Pre-flight checks run
- Images are pulled
- Certificates are generated in /etc/kubernetes/pki/
- kubeconfig generation - the admin.conf, kubelet.conf, controller-manager.conf, and scheduler.conf kubeconfig files are created
- kubelet-start - the kubelet environment is written to /var/lib/kubelet/
- control-plane - static pod manifests for the API server, controller-manager, and scheduler are created in /etc/kubernetes/manifests, which the kubelet watches so it knows which pods to create on startup
- addons applied - CoreDNS and kube-proxy
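One way to peek at what kubeadm init laid down on the control plane (just a manual inspection, not a required step) is to list those directories:
# certificates generated during init
ls /etc/kubernetes/pki/
# kubeconfig files for the admin and control-plane components
ls /etc/kubernetes/*.conf
# static pod manifests the kubelet watches
ls /etc/kubernetes/manifests/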
Now if we run `service kubelet status` again, the kubelet is in a running state.
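The equivalent systemd check works as well; after a successful init, the unit should report itself as active:
# systemd view of the kubelet service; look for "Active: active (running)"
sudo systemctl status kubelet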
With that, our foundation has been laid. I learned a lot this week. I’m eager to connect to the cluster and carry on this upcoming week.