Deploying a Simple 3-Node Kubernetes (k8s) Cluster with the calico Network

2021-01-28 · by Hello泽泽

0. Versions

Docker: 20.10.2
Kubernetes: v1.19.0
Calico: v3.17.1

1. Cloud Hosts

Instance type: ecs.c6.large
Specs: 2 vCPU, 4 GB RAM, 40 GB disk
OS: CentOS 7.9 x64
Count: 3

2. Hostnames

$ cat /etc/hosts | grep k8s
192.168.10.216  k8s-master
192.168.10.217  k8s-work1
192.168.10.218  k8s-work2
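
These entries must exist on all three nodes, and each node's own hostname must match. A side-effect-free sketch of the intended /etc/hosts additions, written to a temp file here instead of the real /etc/hosts:

```shell
# The three name-resolution entries used throughout this article.
# The demo writes to a temp file so nothing on the current machine is touched.
HOSTS_FILE="$(mktemp)"   # stand-in for /etc/hosts
cat >> "$HOSTS_FILE" << 'EOF'
192.168.10.216  k8s-master
192.168.10.217  k8s-work1
192.168.10.218  k8s-work2
EOF
# On each real node the hostname itself must also be set, e.g.:
#   hostnamectl set-hostname k8s-master
grep -c 'k8s-' "$HOSTS_FILE"   # prints 3
```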

3. System Configuration

# Disable SELinux (requires a reboot to take effect)
$ sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
$ cat /etc/selinux/config | grep SELINUX=disabled
SELINUX=disabled

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable the swap partition
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
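
The sed above comments out every fstab line containing " swap ". A self-contained check of that pattern against a throwaway fstab (the UUIDs are made up):

```shell
# Throwaway copy standing in for /etc/fstab
FSTAB="$(mktemp)"
cat > "$FSTAB" << 'EOF'
UUID=abcd-1234 /     xfs  defaults 0 0
UUID=efgh-5678 swap  swap defaults 0 0
EOF
# Same sed as used on the real /etc/fstab above
sed -i '/ swap / s/^\(.*\)$/#\1/g' "$FSTAB"
grep '^#' "$FSTAB"   # prints the now-commented swap line
```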

# Kernel parameters
modprobe br_netfilter   # the bridge-nf settings below require this module
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system   # sysctl -p only reads /etc/sysctl.conf, not /etc/sysctl.d/

# Reboot the host
reboot

4. Deploy Docker

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

yum install -y docker-ce docker-ce-cli containerd.io
systemctl start docker
systemctl enable docker

$ docker -v
Docker version 20.10.2, build 2291f61

5. Configure a Registry Mirror

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://zflya4r5.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
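
Not part of the original article, but worth noting: the kubeadm documentation recommends running Docker with the systemd cgroup driver so that it matches the kubelet. A sketch of a daemon.json combining that with the mirror above, written to a temp file so the demo has no side effects (on a real host this content goes to /etc/docker/daemon.json, followed by a Docker restart):

```shell
DAEMON_JSON="$(mktemp)"   # stand-in for /etc/docker/daemon.json
cat > "$DAEMON_JSON" << 'EOF'
{
  "registry-mirrors": ["https://zflya4r5.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Validate the JSON before restarting Docker (python3 assumed available)
python3 -m json.tool < "$DAEMON_JSON" > /dev/null && echo "daemon.json is valid JSON"
```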

6. Deploy Kubernetes

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF

yum clean all
yum makecache fast

yum -y install yum-utils device-mapper-persistent-data lvm2
yum install -y kubelet-1.19.0-0 kubeadm-1.19.0-0 kubectl-1.19.0-0 --disableexclude=kubernetes
# kubelet has no config until kubeadm init/join runs, so only enable it for now
systemctl enable kubelet

7. Initialize the Master Node

# --pod-network-cidr=10.244.0.0/16 will be needed later when configuring calico

[root@k8s-master ~]#  kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.19.0 --pod-network-cidr=10.244.0.0/16
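
The same flags can also be captured in a kubeadm config file (a sketch; `kubeadm.k8s.io/v1beta2` is the config API version that kubeadm v1.19 accepts) and run with `kubeadm init --config kubeadm-config.yaml`; the filename is arbitrary:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
imageRepository: registry.aliyuncs.com/google_containers
networking:
  # must match CALICO_IPV4POOL_CIDR set later in calico.yaml
  podSubnet: 10.244.0.0/16
```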

Configure kubeconfig authentication

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp /etc/kubernetes/admin.conf $HOME/.kube/config

View the running Pods

[root@k8s-master ~]# kubectl -n kube-system get pod
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-6d56c8448f-kcbwf             0/1     Pending   0          3m15s
coredns-6d56c8448f-v8wwf             0/1     Pending   0          3m15s
etcd-k8s-master                      1/1     Running   0          3m26s
kube-apiserver-k8s-master            1/1     Running   0          3m26s
kube-controller-manager-k8s-master   1/1     Running   0          3m26s
kube-proxy-hxxzn                     1/1     Running   0          3m15s
kube-scheduler-k8s-master            1/1     Running   0          3m26s

8. Join the Worker Nodes

# On the master node, generate the join command for the workers
[root@k8s-master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.10.216:6443 --token w6up3y.5c30uxrdqywzc5ri     --discovery-token-ca-cert-hash sha256:4afed2d94a56b8706ea8b290c2844c1bcb98be6a5097ed06c39b1fa1f311daae

# Run on the work1 node
[root@k8s-work1 ~]# kubeadm join 192.168.10.216:6443 --token w6up3y.5c30uxrdqywzc5ri     --discovery-token-ca-cert-hash sha256:4afed2d94a56b8706ea8b290c2844c1bcb98be6a5097ed06c39b1fa1f311daae

# Run on the work2 node
[root@k8s-work2 ~]# kubeadm join 192.168.10.216:6443 --token w6up3y.5c30uxrdqywzc5ri     --discovery-token-ca-cert-hash sha256:4afed2d94a56b8706ea8b290c2844c1bcb98be6a5097ed06c39b1fa1f311daae

# On the master node, list the cluster nodes (NotReady is expected until a CNI plugin is deployed)
[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES    AGE    VERSION
k8s-master   NotReady   master   47m    v1.19.0
k8s-work1    NotReady   <none>   118s   v1.19.0
k8s-work2    NotReady   <none>   40s    v1.19.0

9. Install the calico Network Plugin

# Download the deployment yaml file
[root@k8s-master ~]# curl https://docs.projectcalico.org/manifests/calico-etcd.yaml -o calico.yaml

# Edit CALICO_IPV4POOL_CIDR to match the pod CIDR defined at cluster init above
vim calico.yaml
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"

# Deploy
[root@k8s-master ~]# kubectl apply -f calico.yaml


Then it errored out; the fixes are below:

[root@k8s-master ~]# kubectl -n kube-system get pod | grep calico
calico-kube-controllers-5b49c7597b-t78hm   0/1     Error               5          4m25s
calico-node-ggwgk                          0/1     Running             3          4m25s
calico-node-hlkdd                          0/1     Running             3          4m25s
calico-node-r9q5t                          0/1     Running             3          4m25s

Error 1:

Both the calico-node pods and calico-kube-controllers fail because the etcd settings in the manifest are still unfilled placeholders (the original screenshots of the error logs are not reproduced here).

Error 1 fix:

Configure the etcd service endpoint and the SSL certificates in calico.yaml

# etcd endpoint
ETCD_ENDPOINTS="https://192.168.10.216:2379"
sed -i "s#.*etcd_endpoints:.*#  etcd_endpoints: \"${ETCD_ENDPOINTS}\"#g" calico.yaml
sed -i "s#__ETCD_ENDPOINTS__#${ETCD_ENDPOINTS}#g" calico.yaml

# etcd certificates (base64-encoded)
ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`

# Substitute the values into calico.yaml
sed -i "s#.*etcd-ca:.*#  etcd-ca: ${ETCD_CA}#g" calico.yaml
sed -i "s#.*etcd-cert:.*#  etcd-cert: ${ETCD_CERT}#g" calico.yaml
sed -i "s#.*etcd-key:.*#  etcd-key: ${ETCD_KEY}#g" calico.yaml

sed -i 's#.*etcd_ca:.*#  etcd_ca: "/calico-secrets/etcd-ca"#g' calico.yaml
sed -i 's#.*etcd_cert:.*#  etcd_cert: "/calico-secrets/etcd-cert"#g' calico.yaml
sed -i 's#.*etcd_key:.*#  etcd_key: "/calico-secrets/etcd-key"#g' calico.yaml

sed -i "s#__ETCD_CA_CERT_FILE__#/etc/kubernetes/pki/etcd/ca.crt#g" calico.yaml
sed -i "s#__ETCD_CERT_FILE__#/etc/kubernetes/pki/etcd/server.crt#g" calico.yaml
sed -i "s#__ETCD_KEY_FILE__#/etc/kubernetes/pki/etcd/server.key#g" calico.yaml

sed -i "s#__KUBECONFIG_FILEPATH__#/etc/cni/net.d/calico-kubeconfig#g" calico.yaml
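
The sed batch above can be sanity-checked against a miniature stand-in for calico.yaml that contains only the keys the substitutions target (the real manifest is much larger):

```shell
MINI="$(mktemp)"   # tiny stand-in for calico.yaml
cat > "$MINI" << 'EOF'
  etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"
  etcd_ca: ""
  etcd_cert: ""
  etcd_key: ""
EOF
ETCD_ENDPOINTS="https://192.168.10.216:2379"
# Same substitutions as used on the real manifest above
sed -i "s#.*etcd_endpoints:.*#  etcd_endpoints: \"${ETCD_ENDPOINTS}\"#g" "$MINI"
sed -i 's#.*etcd_cert:.*#  etcd_cert: "/calico-secrets/etcd-cert"#g' "$MINI"
cat "$MINI"   # etcd_endpoints and etcd_cert now carry the substituted values
```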

Error 2:

calico-kube-controllers reports the following error:

[root@k8s-master ~]# kubectl -n kube-system logs -f  calico-kube-controllers-5b49c7597b-n2wwl
2021-01-28 04:52:53.331 [INFO][1] main.go 88: Loaded configuration from environment config=&config.Config{LogLevel:"info", WorkloadEndpointWorkers:1, ProfileWorkers:1, PolicyWorkers:1, NodeWorkers:1, Kubeconfig:"", DatastoreType:"etcdv3"}
2021-01-28 04:52:53.331 [FATAL][1] main.go 101: Failed to start error=failed to build Calico client: could not initialize etcdv3 client: open /calico-secrets/etcd-cert: permission denied

Error 2 fix:

Reference: https://github.com/kubernetes/website/issues/25587
In the etcd-certs volume definition, change defaultMode: 0400 to defaultMode: 0040:

      volumes:
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0040
            #defaultMode: 0400
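
Why 0040 helps: per the linked issue, calico-kube-controllers runs as a non-root user, and with the pod's fsGroup the mounted secret files are group-owned by that user's group, so group read (0040) succeeds where owner-only read (0400) fails. A local demonstration of what the two octal modes mean (`stat -c` is GNU coreutils):

```shell
# Demonstrate the two octal modes on a throwaway file.
DEMO="$(mktemp)"
chmod 0400 "$DEMO"
stat -c '%a' "$DEMO"   # prints 400: readable by the owning user only
chmod 0040 "$DEMO"
stat -c '%a' "$DEMO"   # prints 40: readable by the owning group only
```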

The calico network plugin now runs successfully

[root@k8s-master ~]# kubectl -n kube-system get pod  | grep calico
calico-kube-controllers-6c749f5bb6-8gf6h   1/1     Running             0          2m54s
calico-node-9cxs4                          1/1     Running             0          38m
calico-node-l9vph                          1/1     Running             0          38m
calico-node-swxq9                          1/1     Running             0          38m

Error 3: coredns fails to start


[root@k8s-master ~]# kubectl -n kube-system get pod | grep coredns
coredns-6d56c8448f-c6x7h                   0/1     ContainerCreating   0          15s
coredns-6d56c8448f-prs5c                   0/1     ContainerCreating   0          15s

coredns error message:

Warning FailedCreatePodSandBox 14s (x4 over 17s) kubelet, k8s-work2 (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "266213ee3ba95ea42c067702990b81f6b5ee1857c6bdee6d247464dfb0a85dc7" network for pod "coredns-6d56c8448f-c6x7h": networkPlugin cni failed to set up pod "coredns-6d56c8448f-c6x7h_kube-system" network: could not initialize etcdv3 client: open /etc/kubernetes/pki/etcd/server.crt: no such file or directory

Error 3 fix:

Cause: the CNI plugin on the worker nodes cannot find the etcd SSL certificates

# Set up SSH key trust and sync the SSL certificates from the master to the workers
ssh-keygen -t rsa 
ssh-copy-id root@k8s-work1
ssh-copy-id root@k8s-work2
scp -r /etc/kubernetes/pki/etcd root@k8s-work1:/etc/kubernetes/pki/etcd
scp -r /etc/kubernetes/pki/etcd root@k8s-work2:/etc/kubernetes/pki/etcd
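
After the scp it is worth confirming both copies match; on the real cluster one could compare `sha256sum /etc/kubernetes/pki/etcd/*` output on the master and the workers. A local, side-effect-free sketch of the copy-then-verify idea (the paths and cert content are temp stand-ins):

```shell
# Stand-ins for the master's and a worker's /etc/kubernetes/pki/etcd
SRC="$(mktemp -d)"; DST="$(mktemp -d)"
echo "fake-cert-data" > "$SRC/server.crt"   # placeholder, not a real cert
cp -r "$SRC/." "$DST/"                      # stands in for the scp step
diff -r "$SRC" "$DST" && echo "certificate directories match"
```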

All Pods are running normally

[root@k8s-master ~]# kubectl -n kube-system get pod
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6c749f5bb6-8xsc5   1/1     Running   0          14m
calico-node-5knwk                          1/1     Running   0          17m
calico-node-qrcw4                          1/1     Running   0          17m
calico-node-t9cxh                          1/1     Running   0          17m
coredns-6d56c8448f-c6x7h                   1/1     Running   0          12m
coredns-6d56c8448f-prs5c                   1/1     Running   0          12m
etcd-k8s-master                            1/1     Running   1          24h
kube-apiserver-k8s-master                  1/1     Running   1          24h
kube-controller-manager-k8s-master         1/1     Running   3          24h
kube-proxy-6kf4f                           1/1     Running   1          23h
kube-proxy-gfg2n                           1/1     Running   1          23h
kube-proxy-hxxzn                           1/1     Running   1          24h
kube-scheduler-k8s-master                  1/1     Running   3          24h