Install k8s with kubeadm

2021-12-13  Charles_linzc

Install and initialize the kubeadm tool

In this article, we'll install a k8s cluster with the kubeadm tool. First, let's prepare a virtual machine in Oracle VirtualBox with the following resources:
CPU: 2 cores, RAM: 4 GB, disk: 20 GB
Why we use kubeadm:

  1. It is the simplest way to install a k8s cluster.
  2. It makes it easier to automate setup and testing for our applications.

There are some requirements if we follow the guide:

  1. One or more machines running a deb/rpm-compatible Linux OS; for example: Ubuntu or CentOS.
  2. 2 GiB or more of RAM per machine--any less leaves little room for your apps.
  3. At least 2 CPUs on the machine that you use as a control-plane node.
  4. Full network connectivity among all machines in the cluster.
  5. Unique hostname, MAC address, and product_uuid for every node.
  6. Certain ports open on your machines.
  7. Swap disabled. You MUST disable swap in order for the kubelet to work properly.
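
These requirements can be spot-checked from a shell on the VM; a quick sketch using standard Linux interfaces (the /proc paths are standard, adjust if your distribution differs):

```shell
# CPU count: a control-plane node needs at least 2
echo "CPUs: $(nproc)"
# Total RAM: should be 2 GiB or more
awk '/MemTotal/ {printf "RAM: %.1f GiB\n", $2/1048576}' /proc/meminfo
# Active swap devices: only the header line should appear when swap is off
cat /proc/swaps
```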

After installing the VM, let's check these requirements:

  1. We use Ubuntu 20.04, so the OS is compatible.
  2. We give the first VM 4 GiB of RAM and a 20 GB disk, which satisfies the requirement.
  3. But the CPU defaults to 1 core; open the VM's settings dialog and set it to 2:


  4. All the VMs will be on the same virtual network.
  5. After restarting the VM, log in and check the hostname, MAC address, and product_uuid:
    hostname: charleslin1, set when the VM was installed.
    MAC: shown by the command "ip link".
    product_uuid: shown by the command "sudo cat /sys/class/dmi/id/product_uuid".
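    The three identifiers can be printed in one go; a small sketch (product_uuid needs root and may not be exposed on every VM, hence the fallback):

```shell
echo "hostname: $(hostname)"
# First MAC address reported by `ip link`
ip link show 2>/dev/null | awk '/link\/ether/ {print "MAC: " $2; exit}'
# Firmware-provided UUID; requires root (-n: fail instead of prompting)
sudo -n cat /sys/class/dmi/id/product_uuid 2>/dev/null || echo "product_uuid: not readable here"
```

    Run it on every node and compare: all three values must differ between nodes.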
  6. Let iptables see bridged traffic.
    We need the br_netfilter module loaded; first check for it with lsmod:
    lsmod | grep br_netfilter
    br_netfilter is the module for bridge firewalling; to load it explicitly, run sudo modprobe br_netfilter.
    For your Linux node's iptables to correctly see bridged traffic, ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system

Install a container runtime
k8s uses the CRI to talk to the container runtime, so we need to install one; here we use Docker.
Refer to the Docker installation guide for Ubuntu and follow its steps.
After installation, we can check the installed version with docker version:

$ docker version

With Docker installed, we'll install these packages on the VM:

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

Second, we'll download the Google Cloud public signing key; it will be used to verify the packages mentioned above.
Because our network cannot reach Google's servers, we download the key (published at https://packages.cloud.google.com/apt/doc/apt-key.gpg) manually: open the link through a proxy that can reach foreign sites, e.g. in Chrome, and drag the file into the VM.
Now move it to the destination folder:

sudo cp apt-key.gpg /usr/share/keyrings/kubernetes-archive-keyring.gpg
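
For reference, when the key URL is reachable (directly or through a proxy configured for the shell), the same result can be had in one step; a sketch with a fallback message in case the download fails:

```shell
KEY_URL=https://packages.cloud.google.com/apt/doc/apt-key.gpg
curl -fsSL "$KEY_URL" -o apt-key.gpg \
  && sudo cp apt-key.gpg /usr/share/keyrings/kubernetes-archive-keyring.gpg \
  || echo "download failed; fetch the key manually as described above"
```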

Then we add a mirror repository by creating the apt source list file:

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Then we can install the tools normally:

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

When we have finished the tasks above, we need to configure the cgroup driver for Docker and k8s. The Container Runtimes page explains that the systemd driver is recommended for kubeadm-based setups instead of the cgroupfs driver, because kubeadm manages the kubelet as a systemd service.
a. Check that cgroup v2 is available on the server (Ubuntu 20.04 with kernel 5.4 provides it by default):

grep cgroup2 /proc/filesystems

If the output contains cgroup2, the kernel supports it.
b. Check Docker's cgroup configuration with docker info:
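
To see only the lines that matter, the output can be filtered; a sketch that degrades gracefully when Docker isn't available:

```shell
# Show only the cgroup-related lines of `docker info`
docker info 2>/dev/null | grep -i cgroup || echo "docker not available"
```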



Obviously, cgroupfs is in use; for k8s we need to change the driver to systemd with the command below:

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

Then restart Docker:

sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker

check the docker info again:



Before initializing kubeadm, we need to disable swap:

$ sudo swapoff -a
$ sudo sed -i 's/.*swap.*/#&/' /etc/fstab
$ cat /etc/fstab

In the cat output, the swap line should now be commented out:



Then reboot the server:

$ reboot

Sometimes we also need to configure the cgroup driver for the kubelet, but since v1.22, if the user doesn't set the cgroupDriver field under KubeletConfiguration, kubeadm defaults it to systemd; that means there is nothing more to do for the kubelet. Now start initializing with kubeadm:

kubeadm init 

But an error occurs while pulling images; use the command below to list the images we need:

$ kubeadm config images list

Then pull those images from a mirror site:

sudo docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0
sudo docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0
sudo docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0
sudo docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0
sudo docker pull registry.aliyuncs.com/google_containers/pause:3.6
sudo docker pull registry.aliyuncs.com/google_containers/etcd:3.5.1-0
sudo docker pull registry.aliyuncs.com/google_containers/coredns:v1.8.6

Note: because of network issues, use the domestic mirror "registry.aliyuncs.com/google_containers" as the prefix.
After all the downloads finish, use docker tag to rename the images:

sudo docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0  k8s.gcr.io/kube-apiserver:v1.23.0
sudo docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0  k8s.gcr.io/kube-controller-manager:v1.23.0
sudo docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0  k8s.gcr.io/kube-scheduler:v1.23.0
sudo docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0  k8s.gcr.io/kube-proxy:v1.23.0
sudo docker tag registry.aliyuncs.com/google_containers/pause:3.6  k8s.gcr.io/pause:3.6
sudo docker tag registry.aliyuncs.com/google_containers/etcd:3.5.1-0  k8s.gcr.io/etcd:3.5.1-0
sudo docker tag registry.aliyuncs.com/google_containers/coredns:v1.8.6  k8s.gcr.io/coredns/coredns:v1.8.6
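
Since every image follows the same mirror-to-official name mapping, the pull-and-tag commands above can be wrapped in a small loop; a sketch (coredns is the one exception, because it is republished under a coredns/ sub-path in k8s.gcr.io):

```shell
MIRROR=registry.aliyuncs.com/google_containers

pull_and_retag() {
  for img in kube-apiserver:v1.23.0 kube-controller-manager:v1.23.0 \
             kube-scheduler:v1.23.0 kube-proxy:v1.23.0 pause:3.6 etcd:3.5.1-0; do
    sudo docker pull "$MIRROR/$img"
    sudo docker tag "$MIRROR/$img" "k8s.gcr.io/$img"
  done
  # coredns needs its sub-path in the official name
  sudo docker pull "$MIRROR/coredns:v1.8.6"
  sudo docker tag "$MIRROR/coredns:v1.8.6" k8s.gcr.io/coredns/coredns:v1.8.6
}
# pull_and_retag   # run this on the VM
```

Alternatively, kubeadm accepts an --image-repository flag (e.g. kubeadm init --image-repository registry.aliyuncs.com/google_containers), which pulls the mirror names directly and skips the retagging.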

The last step is to initialize kubeadm again:

sudo kubeadm init

Now we have finished all the kubeadm initialization tasks and are ready to bring up the k8s cluster.

Create a cluster with kubeadm

After kubeadm initialization, following the hints printed in the console, we'll run the commands below:


mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

This configuration contains the certificates and endpoint info for connecting to the k8s API server. After that we can try a kubectl command:

kubectl get node

Now kubectl is working, but the cluster has no network yet. Use the command below to check the node info:

kubectl describe node k8snode1-virtualbox

As the information above shows, the node's network isn't ready; we need to install a network plugin.

  1. Configure NetworkManager for Calico.
    Create the following configuration file at /etc/NetworkManager/conf.d/calico.conf to prevent NetworkManager from interfering with the interfaces:
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*;interface-name:vxlan.calico;interface-name:wireguard.cali
  2. Stop the firewall:
sudo systemctl stop firewalld
  3. Download the Calico networking manifest:
curl https://docs.projectcalico.org/manifests/calico.yaml -O
  4. Apply calico.yaml:
kubectl apply -f calico.yaml

After step 4, check the pods in the kube-system namespace: the calico-kube-controllers pod has been pulled and installed. And when we check the node status again, it is Ready now.
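
The two checks just described, as explicit commands (guarded so the snippet is safe to run even where kubectl or the cluster isn't reachable):

```shell
if ! command -v kubectl >/dev/null; then
  echo "kubectl not installed here"
elif ! kubectl get pods -n kube-system; then   # the calico-* pods should reach Running
  echo "cluster not reachable from here"
else
  kubectl get nodes                            # STATUS should now be Ready
fi
```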



5. (optional) We can install the calicoctl command-line tool to manage Calico resources and perform administrative functions.
Use the following command to download the calicoctl binary:

curl -o calicoctl -O -L  "https://github.com/projectcalico/calicoctl/releases/download/v3.21.2/calicoctl" 

That completes the network plugin installation.

Schedule pods on the control-plane node

By default, your cluster will not schedule Pods on the control-plane node for security reasons. If you want to be able to schedule Pods on the control-plane node, for example for a single-machine Kubernetes cluster for development, run:

kubectl taint nodes --all node-role.kubernetes.io/master-

This removes the node-role.kubernetes.io/master taint from any node that has it, including the control-plane node (here, node k8snode1-virtualbox), meaning the scheduler will then be able to schedule Pods everywhere.

After this section, a standalone k8s installation is finished. We can try a Redis pod: after applying its configuration, we can see the pod on the server.



Add a second node

Create a second VM. Check the network to make sure the VMs can connect to each other: every node should be able to reach every other node without restriction. Here we configure the VMs with a NAT network in Oracle VirtualBox:



After creating it, install Docker as the runtime on the second server and configure the cgroup driver as described in the prior section; follow the earlier steps up to, but not including, kubeadm initialization.
An easy way to create the second VM is to clone it from a snapshot of VM1, then change the hostname:

vi /etc/hostname          # change the server name
vi /etc/hosts             # update the name-to-IP mapping

We also need to disable swap on the second node by modifying /etc/fstab as above:

sudo sed -i 's/.*swap.*/#&/' /etc/fstab

One more task is to enable and start the kubelet:

systemctl enable kubelet.service
systemctl start kubelet.service
systemctl status kubelet.service

After that, log in to the first VM (the one the control plane is installed on) and create a join token for the second node:

kubeadm token create

Record the token; then generate the CA cert hash, still on the first VM:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
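
As a shortcut, kubeadm can print the complete join command (token plus CA cert hash) in one step on the control-plane node; a sketch, guarded so it is safe to run anywhere:

```shell
if command -v kubeadm >/dev/null; then
  # Prints a ready-to-run "kubeadm join <host>:6443 --token ... --discovery-token-ca-cert-hash sha256:..." line
  kubeadm token create --print-join-command || echo "no cluster initialized here"
else
  echo "kubeadm not installed here"
fi
```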

With the token and CA cert hash, log in to the second VM and join it to the cluster:

kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>

Here <control-plane-host> is the IP of the first VM and the port is 6443, so we get a command like the one below:

kubeadm join --token iln84z.h2v1tjez4nvc20p5 10.0.2.15:6443 --discovery-token-ca-cert-hash sha256:2617192f5a967c78b72d657f05b70f01a99d30c2dac8d32465bdb8ba4ea605cc

kubeadm join 10.0.2.9:6443 --token ybqdpp.vh0sntajqehf5rw9 \
    --discovery-token-ca-cert-hash sha256:efd185e1fc2ee987d97fd34c76b289ed8b0f06e8d4e71ca0156c5410d0ae5e1c

Execute it in the second VM's terminal:

Congratulations! You've succeeded if you see the same message in the console.
Let's check the nodes and pods from the control-plane node. Run kubectl get node and two nodes are shown; the worker node has role <none>.
Run kubectl get pods -A and you'll see all the pods in the system namespaces (we haven't deployed any custom pod in the default namespace yet): calico-* for networking, coredns-* for cluster DNS, and the other important components on the admin node.

In the same way, we can add more worker nodes to the cluster.
