Run a Kubernetes cluster on Ubuntu
Preparatory work
- Disable the swap partition

  Use `sudo vim /etc/fstab` to comment out the swap line, then reboot the server. Run `free -h` to verify that swap is off.

- Configure passwordless SSH login

  - Run `ssh-keygen` on both the master and worker nodes to generate a local key pair.
  - Use `ssh-copy-id` to copy the master's public key to the worker. I use `ssh-copy-id -i .ssh/id_rsa.pub yoke@192.168.139.140`.
  - Run `ssh yoke@192.168.139.140` to check that you can now connect without a password.
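The preparation steps above can be collected into a short sketch (the `yoke` user and the IP address are this tutorial's example values; adjust them for your machines, and note that `swapoff -a` disables swap immediately without the reboot):

```shell
# Turn swap off now and keep it off across reboots (the sed pattern assumes
# a standard fstab swap entry; double-check the file afterwards).
sudo swapoff -a
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
free -h                        # the Swap row should read 0B

# Passwordless SSH from the master to the worker.
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub yoke@192.168.139.140
ssh yoke@192.168.139.140       # should connect without prompting for a password
```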
Install Docker
Installation is just one command: `sudo apt install docker.io`. We then need to do some configuration for Docker: open the config file with `sudo vim /etc/docker/daemon.json` and copy the following content into the JSON file.
{
  "registry-mirrors": [
    "https://dockerhub.azk8s.cn",
    "https://reg-mirror.qiniu.com",
    "https://quay-mirror.qiniu.com"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
Then Docker must be restarted: `sudo systemctl daemon-reload && sudo systemctl restart docker`
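After the restart, it is worth confirming that Docker actually picked up the systemd cgroup driver, since a cgroup-driver mismatch with the kubelet is a common source of kubeadm failures:

```shell
# Check the active cgroup driver reported by the Docker daemon.
docker info 2>/dev/null | grep -i 'cgroup driver'
# Expected output: Cgroup Driver: systemd
```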
Install Kubernetes
First, use the following commands to install the Kubernetes tools.
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
Both the master and the worker nodes need to perform all of the operations above.
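Optionally, a common practice (not part of the original steps) is to pin these packages so an unattended apt upgrade does not move the cluster to a new version, and to confirm what was installed:

```shell
sudo apt-mark hold kubelet kubeadm kubectl   # prevent accidental upgrades
kubeadm version -o short                     # e.g. v1.21.1
kubectl version --client --short
```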
Configure the master node
Bootstrap the master node
We must be root before executing the following commands. First, `kubeadm init` bootstraps the master node. However, some of the required images cannot be pulled from within China, so `--image-repository` is used to point at a domestic mirror. `--apiserver-advertise-address` is the master node's IP address, and `--pod-network-cidr` configures the Flannel network. We can also use `kubeadm config images list` to list all required images.
kubeadm init --apiserver-advertise-address=192.168.139.140 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16
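Before running `kubeadm init`, the images can optionally be pre-pulled so that any registry problem shows up early rather than in the middle of the bootstrap:

```shell
# Pre-pull all control-plane images from the domestic mirror.
kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers
```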
However, the above command may report the following error. This happens because the coredns image is not on the Aliyun mirror, so we have to download it manually from another source. The downloaded version must correspond to the Kubernetes version.
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0: output: Error response from daemon: pull access denied for registry.aliyuncs.com/google_containers/coredns/coredns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Running `kubeadm config images list` shows the required images and versions:
k8s.gcr.io/kube-apiserver:v1.21.1
k8s.gcr.io/kube-controller-manager:v1.21.1
k8s.gcr.io/kube-scheduler:v1.21.1
k8s.gcr.io/kube-proxy:v1.21.1
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
Then use `docker pull registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.0` to download coredns, and `docker tag registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0` to retag it to the name kubeadm expects.
Finally, run `kubeadm init --apiserver-advertise-address=192.168.139.141 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16` again. If all is well, the following should appear.
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.139.141:6443 --token 1t19hg.gtdbd7exc6lue4e2 \
--discovery-token-ca-cert-hash sha256:801877496bf306b16f9af651907f1b739918b87f1cffb13cedb58092873e168e
As indicated above, we need to execute `export KUBECONFIG=/etc/kubernetes/admin.conf`.
Deploy the Flannel network
It is also just one command: `kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml`
At this point, the configuration of the master node is complete.
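To confirm the network is actually up, you can watch the system pods; once the flannel and coredns pods are Running, the master node should report Ready:

```shell
kubectl get pods -n kube-system   # coredns and kube-flannel pods should reach Running
kubectl get nodes                 # the master's STATUS should change to Ready
```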
Configure the worker node
On the worker node, execute the join command that was printed when the master node was bootstrapped.
kubeadm join 192.168.139.141:6443 --token 1t19hg.gtdbd7exc6lue4e2 \
--discovery-token-ca-cert-hash sha256:801877496bf306b16f9af651907f1b739918b87f1cffb13cedb58092873e168e
If the log is as follows, the configuration of the worker node is successful.
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
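If you add another worker later and no longer have the original output, note that join tokens expire (24 hours by default); a fresh join command can be printed on the master with:

```shell
# Generate a new token and print the full kubeadm join command for it.
kubeadm token create --print-join-command
```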
Finally, on the master node, we can verify success by executing `kubectl get nodes`.
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 12m v1.21.1
worker1 Ready <none> 89s v1.21.1