Offline Deployment of a Kubernetes (v1.20.2) Cluster (Part 2)

2021-02-01  戈羽殇雪

In the previous article, the required docker images and rpm packages were uploaded to the target servers.
Host information:

hostname    ip
master01    10.196.243.20
master02    10.196.243.21
master03    10.196.243.22
node01      10.196.243.23
node02      10.196.243.24
node03      10.196.243.25
node04      10.196.243.26
node05      10.196.243.27
vip         10.196.243.120

Installing and configuring the Kubernetes cluster

Initialize all servers

1. Set the hostname
Set each host's hostname according to the plan above, using master01 as an example:

hostnamectl set-hostname master01

Also record every node in /etc/hosts:

cat >> /etc/hosts << EOF
10.196.243.20   master01
10.196.243.21   master02
10.196.243.22   master03
10.196.243.23   node01
10.196.243.24   node02
10.196.243.25   node03
10.196.243.26   node04
10.196.243.27   node05
EOF

Verify that each node's MAC address and product UUID are unique:

cat /sys/class/net/eth0/address
cat /sys/class/dmi/id/product_uuid
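
If password-less SSH between the nodes is already set up, a small loop (a sketch; adjust the interface name, eth0 here versus ens160 used later on these hosts, to match your environment) can collect both values from every node for comparison:

for host in master01 master02 master03 node01 node02 node03 node04 node05; do
    echo "== $host =="
    ssh root@$host "cat /sys/class/net/eth0/address; cat /sys/class/dmi/id/product_uuid"
done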

2. Disable swap, disable SELinux, disable NetworkManager, and load the required kernel modules

cat init.sh
#!/bin/bash 
### Node initialization script ###

# Stop and disable firewalld
systemctl stop firewalld && systemctl disable firewalld

# Disable SELinux
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

# Disable swap (the kubelet refuses to start while swap is enabled)
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Stop and disable NetworkManager
systemctl stop NetworkManager && systemctl disable NetworkManager

# Load the bridge netfilter module and set kernel parameters
modprobe br_netfilter
cat <<EOF >>/etc/sysctl.conf
net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply the new settings
sysctl -p 

echo "----init finish-----"

Run this script on every machine; a distribution loop like the sketch below can help.
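
A minimal sketch, assuming password-less SSH as root from master01 to the other hosts (hostnames follow the /etc/hosts entries above):

for host in master02 master03 node01 node02 node03 node04 node05; do
    scp init.sh root@$host:/root/
    ssh root@$host "bash /root/init.sh"
done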

Install the required yum packages

cd /usr/local/src
tar zxvf yum.tar.gz 
cd yum/x86_64/7/
ls -l
drwxr-xr-x. 4 root root 4096 Feb  1 03:24 base
drwxr-xr-x  4 root root  251 Feb  1 03:24 docker-ce-stable
drwxr-xr-x. 4 root root 4096 Feb  1 03:24 extras
drwxr-xr-x  4 root root  158 Feb  1 03:24 kubernetes
-rw-r--r--  1 root root  158 Feb  1 03:25 timedhosts
-rw-r--r--  1 root root  424 Jan 28 06:43 timedhosts.txt
drwxr-xr-x. 4 root root 4096 Feb  1 03:24 updates

Install all of the rpm packages under base, extras, docker-ce-stable, and kubernetes:

rpm -Uvh <package-name>
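
Installing packages one at a time is tedious; as a sketch, every rpm under the four repository directories can be installed in a single transaction (if dependency errors appear for packages already provided by the base system, --nodeps may help, but use it with care):

cd /usr/local/src/yum/x86_64/7/
find base extras docker-ce-stable kubernetes -name "*.rpm" | xargs rpm -Uvh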

Once all packages are installed, configure and start the docker service:

mkdir -p /etc/docker
cat <<EOF >/etc/docker/daemon.json
{
  "bridge": "none",
  "iptables": false,
  "exec-opts":
    [
      "native.cgroupdriver=systemd"
    ],
  "data-root": "/opt/docker",
  "live-restore": true,
  "log-driver": "json-file",
  "log-opts":
    {
      "max-size": "100m"
    },
  "registry-mirrors":
    [
      "https://lje6zxpk.mirror.aliyuncs.com",
      "https://lms7sxqp.mirror.aliyuncs.com",
      "https://registry.docker-cn.com"
    ]
}
EOF
systemctl enable --now docker 
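
A quick check that docker picked up the configuration; the cgroup driver must be systemd to match the kubelet:

docker info | grep -E "Cgroup Driver|Docker Root Dir"
# Expect: Cgroup Driver: systemd, Docker Root Dir: /opt/docker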

Configure keepalived

master01:

more /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
   router_id master01
}
vrrp_instance VI_1 {
    state MASTER 
    interface ens160
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.196.243.120 
    }
}

master02:

more /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
   router_id master02
}
vrrp_instance VI_1 {
    state MASTER 
    interface ens160
    virtual_router_id 50
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.196.243.120 
    }
}

master03:

more /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
   router_id master03
}
vrrp_instance VI_1 {
    state MASTER 
    interface ens160
    virtual_router_id 50
    priority 80
    advert_int 1
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.196.243.120 
    }
}

Once configured, enable and start keepalived on master01, master02, and master03:

systemctl enable keepalived && systemctl start keepalived
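
To confirm the setup, the VIP should be bound to the interface on master01, the highest-priority node:

ip addr show ens160 | grep 10.196.243.120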

Import the images

tar zxvf images.tar.gz 
docker load < coredns.tar
docker load < etcd.tar
docker load < flannel.tar
docker load < kube-apiserver.tar
docker load < kube-controller-manager.tar
docker load < kube-proxy.tar
docker load < kube-scheduler.tar
docker load < pause.tar
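
Equivalently, a short loop loads every archive (assuming all the .tar files were unpacked into the current directory):

for f in *.tar; do docker load < "$f"; done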

After the import completes, check the result:

docker images
REPOSITORY                           TAG           IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.20.2       43154ddb57a8   2 weeks ago     118MB
k8s.gcr.io/kube-apiserver            v1.20.2       a8c2fdb8bf76   2 weeks ago     122MB
k8s.gcr.io/kube-controller-manager   v1.20.2       a27166429d98   2 weeks ago     116MB
k8s.gcr.io/kube-scheduler            v1.20.2       ed2c44fbdd78   2 weeks ago     46.4MB
quay.io/coreos/flannel               v0.13.1-rc1   f03a23d55e57   2 months ago    64.6MB
k8s.gcr.io/etcd                      3.4.13-0      0369cf4303ff   5 months ago    253MB
k8s.gcr.io/coredns                   1.7.0         bfe3a36ebd25   7 months ago    45.2MB
k8s.gcr.io/pause                     3.2           80d28bedfe5d   11 months ago   683kB

Initialize the Kubernetes control plane

master01:

# Initialize using the kubeadm-config.yaml file
more kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.2
apiServer:
  certSANs:    # list the hostname and IP of every kube-apiserver node, plus the VIP
  - master01
  - master02
  - master03
  - node01
  - node02
  - node03
  - 10.196.243.20
  - 10.196.243.21
  - 10.196.243.22
  - 10.196.243.23
  - 10.196.243.24
  - 10.196.243.25
  - 10.196.243.26
  - 10.196.243.27
  - 10.196.243.120

controlPlaneEndpoint: "10.196.243.120:6443"
networking:
  podSubnet: "10.244.0.0/16"
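
Because the installation is offline, it is worth confirming that every image kubeadm expects for v1.20.2 is already loaded before running init; a quick check on master01:

kubeadm config images list --kubernetes-version v1.20.2
docker images | grep -E "kube-|etcd|coredns|pause"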

One thing to watch here is the networking section: podSubnet must follow your network plan (and match the Network value in kube-flannel.yml), otherwise all kinds of problems will appear later.
Run the initialization:

kubeadm init --config=kubeadm-config.yaml
......................
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 10.196.243.120:6443 --token f1x2tx.9oitkym0w2kvu5od \
    --discovery-token-ca-cert-hash sha256:ded7598fa692632106f5241737da2aa4db778d3dfd66a9d25026006ab9b2f0ef \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.196.243.120:6443 --token f1x2tx.9oitkym0w2kvu5od \
    --discovery-token-ca-cert-hash sha256:ded7598fa692632106f5241737da2aa4db778d3dfd66a9d25026006ab9b2f0ef 

If the initialization fails, run:

kubeadm reset 

If that command errors out, add the --force flag.

Load the environment variables

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
echo "source <(kubectl completion bash)" >> ~/.bash_profile
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
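
At this point kubectl can reach the API server through the VIP; the node will stay NotReady until the pod network is deployed in the next step:

kubectl get nodes
kubectl get pods -n kube-system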

Add the flannel network

Use the kube-flannel.yml file downloaded earlier:

kubectl apply -f kube-flannel.yml
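
Once the flannel pod on master01 is running, coredns should move to Running and the node to Ready; roughly:

kubectl get pods -n kube-system -o wide | grep flannel
kubectl get nodes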

Join the other control-plane nodes

The key step is distributing the certificates:

# Distribute the certificate files from master01 to master02 and master03
more cert-main-master.sh 
USER=root # customizable
CONTROL_PLANE_IPS="10.196.243.21 10.196.243.22"
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
    # Quote this line if you are using external etcd
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done
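
The script assumes password-less SSH as root from master01 to master02 and master03; run it on master01:

bash cert-main-master.sh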

Run the following script on master02 and master03:

 more cert-other-master.sh 
USER=root # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /${USER}/ca.crt /etc/kubernetes/pki/
mv /${USER}/ca.key /etc/kubernetes/pki/
mv /${USER}/sa.pub /etc/kubernetes/pki/
mv /${USER}/sa.key /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
# Quote this line if you are using external etcd
mv /${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key

Join master02 and master03 to the control plane

Note that master02 and master03 must already have the docker, kubelet, kubectl and other yum packages installed, and the images imported, before running the following:

  kubeadm join 10.196.243.120:6443 --token f1x2tx.9oitkym0w2kvu5od \
    --discovery-token-ca-cert-hash sha256:ded7598fa692632106f5241737da2aa4db778d3dfd66a9d25026006ab9b2f0ef \
    --control-plane 

Also set up the environment variables:

scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile

To verify, run:

kubectl get nodes

Join the worker nodes

Perform these steps on node01, node02, node03, node04, and node05.
These nodes must likewise have completed the node initialization, have docker, kubelet, kubeadm and the other packages installed, and have the docker images imported.
Then run:

kubeadm join 10.196.243.120:6443 --token f1x2tx.9oitkym0w2kvu5od \
    --discovery-token-ca-cert-hash sha256:ded7598fa692632106f5241737da2aa4db778d3dfd66a9d25026006ab9b2f0ef 

Once these commands complete without errors, verify on a master node that the nodes joined successfully.
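
Verification is the same kubectl get nodes used earlier; all eight nodes should eventually report Ready. If the bootstrap token has expired by the time a node joins (tokens are valid for 24 hours by default), print a fresh join command on master01:

kubectl get nodes
kubeadm token create --print-join-command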
