Installing Kubernetes 1.24 on CentOS 7

Reference: https://www.bilibili.com/video/BV1Rt4y1s7Lc?p=1

Host configuration

Set the hostname

Give each node a unique hostname:

hostnamectl set-hostname master0

Set a static IP
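The post gives no commands for this step. A minimal sketch using nmcli, assuming the connection is named ens33 and the gateway/DNS values below (none of which come from the original):

nmcli connection modify ens33 \
  ipv4.method manual \
  ipv4.addresses 192.168.0.2/24 \
  ipv4.gateway 192.168.0.1 \
  ipv4.dns 223.5.5.5
nmcli connection up ens33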

Disable the swap partition
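No commands were given for this step either; the usual approach is:

# turn swap off immediately
swapoff -a
# comment out the swap entry so it stays off across reboots
sed -i '/ swap / s/^/#/' /etc/fstab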

Add hosts entries

echo "192.168.0.2   master0" >> /etc/hosts

Disable the firewall and SELinux

setenforce 0
# make the SELinux change survive reboots
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
systemctl disable firewalld && systemctl stop firewalld

Set up NTP time sync

echo "0 * * * * ntpdate time1.aliyun.com" >> /var/spool/cron/root

Upgrade the kernel

Install the new kernel

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
# install kernel-ml; ml is the mainline (latest) kernel, lt is the long-term support branch
yum --enablerepo='elrepo-kernel' install kernel-ml.x86_64

Set the default boot kernel

# entry 0 is the newest installed kernel, i.e. the one just added
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg

Reboot and verify the kernel upgrade

reboot
# after the reboot, confirm the new kernel is running
uname -r

Configure IP forwarding and bridge filtering

Run this on all nodes:

Add the forwarding and bridge-filter settings

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF

# apply the settings (br_netfilter from the next step must be loaded first,
# otherwise the bridge keys will fail to apply)
sysctl -p /etc/sysctl.d/k8s.conf

Load the br_netfilter module

modprobe br_netfilter
lsmod | grep br_netfilter

Install ipset and ipvsadm

yum install ipset ipvsadm

Configure the IPVS kernel modules to load

cat > /etc/sysconfig/modules/ipvs.modules << EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

# make the file executable, load the modules now, and verify
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod |grep -e ip_vs -e nf_conntrack

Prepare Docker

Download the yum repo file

curl -o /etc/yum.repos.d/docker-ce-aliyun.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Install Docker

yum install -y docker-ce

Switch the cgroup driver to systemd

cat > /etc/docker/daemon.json << EOF
{
    "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

Start Docker

systemctl enable docker && systemctl start docker
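To confirm the systemd cgroup driver took effect:

docker info | grep -i 'cgroup driver'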

Install cri-dockerd

Prepare the Go toolchain

wget https://golang.google.cn/dl/go1.18.2.linux-amd64.tar.gz
tar zxvf go1.18.2.linux-amd64.tar.gz -C /usr/local/

Append the following environment variables to /etc/profile (it is sourced below):

export GOROOT=/usr/local/go
export GOPATH=$HOME/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

Create the GOPATH directories

mkdir -p ~/go/bin/ ~/go/src/ ~/go/pkg/
source /etc/profile
go version

Build and install cri-dockerd

git clone https://github.com/Mirantis/cri-dockerd.git
cd cri-dockerd
mkdir bin
export GOPROXY=https://proxy.golang.com.cn,direct
cd src && go get && go build -o ../bin/cri-dockerd
mkdir -p /usr/local/bin
install -o root -g root -m 0755 ../bin/cri-dockerd /usr/local/bin/cri-dockerd
# copy the systemd units shipped in the repo before editing them
# (path assumes the shell is still in src/ from the build step above)
cp -a ../packaging/systemd/* /etc/systemd/system
sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service
systemctl daemon-reload
systemctl enable cri-docker.service
systemctl enable --now cri-docker.socket
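The socket that the later kubeadm commands point at should now exist; a quick check:

systemctl is-active cri-docker.socket
ls -l /var/run/cri-dockerd.sock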

Error encountered:

go: github.com/Microsoft/hcsshim@v0.8.10-0.20200715222032-5eafd1556990: Get "https://proxy.golang.org/github.com/%21microsoft/hcsshim/@v/v0.8.10-0.20200715222032-5eafd1556990.mod": dial tcp 142.251.42.241:443: i/o timeout

Fix:

# use a Go module proxy reachable from mainland China
export GOPROXY=https://proxy.golang.com.cn,direct

Install Kubernetes

Configure the yum repo

cat > /etc/yum.repos.d/k8s-aliyun.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
EOF

Install the Kubernetes packages

yum install kubelet kubeadm kubectl

Configure kubelet

cat > /etc/sysconfig/kubelet << EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
EOF

systemctl enable kubelet

Prepare the cluster images

List the required images

# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.24.0
k8s.gcr.io/kube-controller-manager:v1.24.0
k8s.gcr.io/kube-scheduler:v1.24.0
k8s.gcr.io/kube-proxy:v1.24.0
k8s.gcr.io/pause:3.7
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/coredns/coredns:v1.8.6

Image download script

#!/bin/bash

imageList=$(kubeadm config images list)

echo "Pulling images from the aliyun mirror"
kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers --cri-socket unix:///var/run/cri-dockerd.sock

if [ $? -eq 0 ]; then
  echo "Images pulled successfully"
else
  echo "Image pull failed, please investigate"
  exit 1
fi

# re-tag the mirrored images with the k8s.gcr.io names kubeadm expects
aliDomain="registry.aliyuncs.com/google_containers"
for image in $imageList; do
  imageName=$(echo $image | awk -F"k8s.gcr.io/" '{print $2}')
  echo $imageName
  if [[ $imageName =~ 'coredns' ]]; then
    # the mirror stores coredns flat, without the extra coredns/ path segment
    imageName2=$(echo $imageName | awk -F/ '{print $NF}')
    docker tag $aliDomain/$imageName2 k8s.gcr.io/$imageName
  else
    docker tag $aliDomain/$imageName k8s.gcr.io/$imageName
  fi
done

imageFile=k8s-v1.24.0.tar
echo "Saving images to $imageFile"
docker save -o $imageFile $imageList
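
The saved archive can then be copied to the other nodes and imported there:

docker load -i k8s-v1.24.0.tar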

Initialize the cluster

kubeadm init --kubernetes-version=v1.24.0 \
--pod-network-cidr=10.10.0.0/16 \
--apiserver-advertise-address=192.168.0.2 \
--cri-socket unix:///var/run/cri-dockerd.sock

Error

journalctl -xeu kubelet

The log shows errors like the following:
go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"master0.16f0c2159f772fce", Generat
_runtime.go:212] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.1\": Error r
ntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.1\": Error respon
ntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.1\": Error respon
rkers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-master0_kube-system(76bd90253805472e28e9f0e0f7bd6ec6)\" with C

Fix:

docker pull registry.aliyuncs.com/google_containers/pause:3.1
docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
kubeadm reset --cri-socket unix:///var/run/cri-dockerd.sock
kubeadm init --kubernetes-version=v1.24.0 --pod-network-cidr=10.10.0.0/16 --apiserver-advertise-address=192.168.0.2 --cri-socket unix:///var/run/cri-dockerd.sock

Configure the cluster client

Follow the instructions printed by kubeadm after a successful init:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.2:6443 --token opnopd.k1gtttcotu1mwxzn \
        --discovery-token-ca-cert-hash sha256:06d41e86c5085eda93f4d994fffbd68f9f1545839cb0c9db3f8b41e47c2a92a2

Set up the cluster network

Reference: https://projectcalico.docs.tigera.io/about/about-calico

Deploy the operator

wget https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml --no-check-certificate
kubectl apply -f tigera-operator.yaml

Install via custom resources

wget https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml  --no-check-certificate

# set the cidr value on line 13 to match this cluster's pod network (the 10.10.0.0/16 passed to kubeadm init)
 9   calicoNetwork:
 10     # Note: The ipPools section cannot be modified post-install.
 11     ipPools:
 12     - blockSize: 26
 13       cidr: 10.10.0.0/16
 14       encapsulation: VXLANCrossSubnet

kubectl apply -f custom-resources.yaml
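
The Calico pods take a few minutes to come up; progress can be watched with:

watch kubectl get pods -n calico-system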

Remove the taints on the master

# v1.24 applies both the control-plane taint and the legacy master taint
kubectl taint nodes --all node-role.kubernetes.io/control-plane- node-role.kubernetes.io/master-

Install the calicoctl client

curl -L https://github.com/projectcalico/calico/releases/download/v3.23.1/calicoctl-linux-amd64 -o calicoctl
chmod +x calicoctl
cp calicoctl /usr/bin/
calicoctl get nodes

Add worker nodes

Copy the join command printed at the end of the master deployment, append the CRI socket flag, and run it as root:

kubeadm join 192.168.0.2:6443 --token opnopd.k1gtttcotu1mwxzn \
--discovery-token-ca-cert-hash sha256:06d41e86c5085eda93f4d994fffbd68f9f1545839cb0c9db3f8b41e47c2a92a2 \
--cri-socket unix:///var/run/cri-dockerd.sock
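
Back on the master, the new worker should appear shortly:

kubectl get nodes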