
K3s: Offline Installation of a Highly Available Cluster

2021-10-18  枕梦_a280

I. Server environment and deployment diagram

(The deployment diagram from the original article is not reproduced here. The layout used below: 10.2.2.10 runs the MySQL datastore and the Rancher UI; 10.2.2.11 and 10.2.2.12 run HAProxy + keepalived with virtual VIP 10.2.2.100; 10.2.2.13 and 10.2.2.14 are k3s server nodes; 10.2.2.15 and 10.2.2.16 are k3s agent nodes.)

II. Server initialization script

Scope: all hosts
Reboot the servers after the script completes.

#!/bin/sh
# Server initialization script (run on all hosts)

# Environment tag embedded in this host's hostname
# (d = development, p = pre-production, t = testing, a = production)
IP_ENV=t

# Disable the firewall and SELinux
sudo systemctl stop firewalld && sudo systemctl disable firewalld
sudo systemctl stop iptables && sudo systemctl disable iptables
sudo setenforce 0
sudo sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
sudo sed -i 's/SELINUX=permissive/SELINUX=disabled/g' /etc/selinux/config

# Disable swap
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
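# Also turn swap off for the running system (my addition; the fstab edit above
# only takes effect after the reboot)
sudo swapoff -a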

# Disable NetworkManager
sudo systemctl stop NetworkManager && sudo systemctl disable NetworkManager

# Set the time zone and locale
sudo ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
sudo sh -c "echo 'LANG=\"en_US.UTF-8\"' >> /etc/profile"
source /etc/profile

# Raise the maximum number of open files
sudo sh -c "echo '* soft nofile 65535' >> /etc/security/limits.conf"
sudo sh -c "echo '* hard nofile 65535' >> /etc/security/limits.conf"

# Load the IPVS kernel modules
for mod in ip_vs ip_vs_dh ip_vs_fo ip_vs_ftp ip_vs_lblc ip_vs_lblcr ip_vs_lc \
           ip_vs_nq ip_vs_ovf ip_vs_pe_sip ip_vs_rr ip_vs_sed ip_vs_sh \
           ip_vs_wlc ip_vs_wrr; do
    sudo /sbin/modprobe $mod
done

# Pass bridged IPv4 traffic to iptables chains.
# If a key is already configured, rewrite it in place:
sudo sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g"  /etc/sysctl.conf
sudo sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g"  /etc/sysctl.conf
sudo sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g"  /etc/sysctl.conf
sudo sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g"  /etc/sysctl.conf
sudo sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g"  /etc/sysctl.conf
sudo sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g"  /etc/sysctl.conf
sudo sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g"  /etc/sysctl.conf
# The keys may be missing entirely, so also append them (duplicates are harmless: sysctl -p applies lines in order, so the last value wins)
sudo sh -c 'echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf'
sudo sh -c 'echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf'
sudo sh -c 'echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf'
sudo sh -c 'echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf'
sudo sh -c 'echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf'
sudo sh -c 'echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf'
sudo sh -c 'echo "net.ipv6.conf.all.forwarding = 1"  >> /etc/sysctl.conf'
# This parameter is raised for running Elasticsearch
sudo sh -c 'echo "vm.max_map_count = 655300" >> /etc/sysctl.conf' 
# Apply the settings above
sudo sysctl -p

# Set the hostname
# Grab the last two octets of the IP and join them with the environment tag (d/p/t/a as above)
ipNumlast2=`ip addr|egrep '[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+'|grep -v '127'|awk -F'[ ]+' '{print $3}'|cut -d / -f 1|cut -d . -f 3-4|tr "\." "${IP_ENV}"`
# Apply the hostname
sudo hostnamectl set-hostname $ipNumlast2.cluster
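# Example: with IP_ENV=t, a host at 10.2.2.13 yields ipNumlast2="2t13" and thus
# the hostname "2t13.cluster", matching the hosts file in section III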

# Switch the yum repositories (I default to the 163.com mirror)
sudo rm -rf /etc/yum.repos.d/*
# The heredoc delimiter is escaped so the inner shell leaves $releasever and
# $basearch for yum to expand instead of expanding them itself (to nothing)
sudo sh -c 'cat > /etc/yum.repos.d/163.repo <<\EOF
[base]
name=CentOS-$releasever - Base - 163.com
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
baseurl=http://mirrors.163.com/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-7

#released updates
[updates]
name=CentOS-$releasever - Updates - 163.com
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
baseurl=http://mirrors.163.com/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-7

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras - 163.com
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
baseurl=http://mirrors.163.com/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-7

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus - 163.com
baseurl=http://mirrors.163.com/centos/$releasever/centosplus/$basearch/
gpgcheck=1
enabled=0
gpgkey=http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-7
EOF'
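
# Optional sanity check of the new repo (my addition, left commented out):
# sudo yum clean all && sudo yum makecache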

# Install some basic tools
sudo yum -y install wget vim lsof net-tools chrony

# Configure time synchronization (drop the default server lines, then append the Aliyun NTP servers)
sudo sed -i '/^server.*iburst$/d' /etc/chrony.conf
sudo sh -c 'cat >> /etc/chrony.conf <<EOF
server ntp1.aliyun.com iburst
server ntp2.aliyun.com iburst
server ntp3.aliyun.com iburst
server ntp4.aliyun.com iburst
server ntp5.aliyun.com iburst
server ntp6.aliyun.com iburst
server ntp7.aliyun.com iburst
EOF'
sudo systemctl start chronyd
sudo systemctl enable chronyd
sudo chronyc sources -v

III. Write the IPs and matching hostnames to the hosts file

Scope: all hosts

sudo sh -c 'cat >>/etc/hosts<<EOF
10.2.2.10 2t10 2t10.cluster
10.2.2.11 2t11 2t11.cluster
10.2.2.12 2t12 2t12.cluster
10.2.2.13 2t13 2t13.cluster
10.2.2.14 2t14 2t14.cluster
10.2.2.15 2t15 2t15.cluster
10.2.2.16 2t16 2t16.cluster
EOF'
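
A quick way to confirm the new entries resolve (a small check of my own, not from the original article):

[demo@2t13 ~]$ for h in 2t10 2t11 2t12 2t13 2t14 2t15 2t16; do getent hosts $h; done
10.2.2.10       2t10 2t10.cluster
... ...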

IV. Installing and configuring Docker

Install method: offline binary install
Scope: all hosts

1. Download the Docker binary package
https://download.docker.com/linux/static/stable/

2. Upload it to the server and extract it
[demo@2t16 docker]$ ls # list
docker-20.10.9.tgz
[demo@2t16 docker]$ tar -xvf docker-20.10.9.tgz  # extract
docker/
docker/containerd-shim-runc-v2
docker/dockerd
docker/docker-proxy
docker/ctr
docker/docker
docker/runc
docker/containerd-shim
docker/docker-init
docker/containerd

3. Move the Docker binaries into place
[demo@2t16 docker]$ sudo mv docker/* /usr/bin/  # move
[demo@2t16 docker]$ ls /usr/bin/docker*   # verify
/usr/bin/docker  /usr/bin/dockerd  /usr/bin/docker-init  /usr/bin/docker-proxy

4. Create the configuration file
The inline annotations from the original are collected here, because JSON itself does not allow comments: "data-root" is the Docker data directory (customize it), "bip" is the default bridge subnet (customize it), "insecure-registries" lists private registry addresses, and "registry-mirrors" lists registry mirror (accelerator) addresses.
[demo@2t16 docker]$ sudo mkdir /etc/docker # first create a config directory
[demo@2t16 docker]$ sudo sh -c 'cat >/etc/docker/daemon.json<<EOF
{
    "oom-score-adjust": -1000,
    "data-root": "/home/qfsystem/docker-data", # docker数据目录 自定义
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m",
        "max-file": "10"
    },
    "exec-opts": [
        "native.cgroupdriver=cgroupfs"
    ],
    "max-concurrent-downloads": 10,
    "max-concurrent-uploads": 10,
    "bip": "182.10.0.1/16", # 默认网段 自定义 
    "insecure-registries": [
        "0.0.0.0/0" # 私有仓库地址
    ],
    "registry-mirrors": [
        "https://yd48ur9i.mirror.aliyuncs.com" # 镜像加速器地址
    ],
    "storage-driver": "overlay2",
    "storage-opts": [
        "overlay2.override_kernel_check=true"
    ]
}
EOF'
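
Before starting the daemon it is worth confirming the file is valid JSON; a minimal check of my own using the Python interpreter that ships with CentOS 7:

[demo@2t16 docker]$ python -m json.tool /etc/docker/daemon.json >/dev/null && echo 'daemon.json is valid JSON'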

5. Create the systemd unit file
# The heredoc delimiter is escaped so $MAINPID reaches the unit file literally
# instead of being expanded (to nothing) by the inner shell
sudo sh -c 'cat>/etc/systemd/system/docker.service<<\EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
LimitNOFILE=infinity
LimitNPROC=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF'

6. Start it and check its status
[demo@2t14 docker]$ sudo systemctl daemon-reload   # reload unit files
[demo@2t14 docker]$ sudo systemctl start docker    # start docker
[demo@2t14 docker]$ sudo systemctl enable docker  # enable docker at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /etc/systemd/system/docker.service.
[demo@2t14 docker]$ sudo systemctl status docker   # check docker's status
● docker.service - Docker Application Container Engine
   Loaded: loaded (/etc/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-10-15 23:24:23 CST; 15s ago
     Docs: https://docs.docker.com
 Main PID: 21710 (dockerd)
   CGroup: /system.slice/docker.service
           ├─21710 /usr/bin/dockerd
           └─21740 containerd --config /var/run/docker/containerd/containerd.toml --log-level info
... ...

7. Verify that the Docker configuration took effect
[demo@2t16 docker]$ sudo docker info
Client:
 Context:    default
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 20.10.9
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 5b46e404f6b9f661a205e28d59c982d3634148f8
 runc version: v1.0.2-0-g52b36a2d
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.4.246-1.el7.elrepo.x86_64
 Operating System: CentOS Linux 7 (Core)
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 470.9MiB
 Name: 2t16.cluster.ydca
 ID: CRUJ:Y2DX:A3A5:MKKK:PJIC:QDFT:2NDO:XVQ5:APAW:RMSJ:4K5Z:KPXY
 Docker Root Dir: /home/demo/docker-data
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  0.0.0.0/0
  127.0.0.0/8
 Registry Mirrors:
  https://yd48ur9i.mirror.aliyuncs.com/
 Live Restore Enabled: false
 Product License: Community Engine

V. Start a MySQL 5.7 database container

Purpose: provides the datastore for k3s; k3s supports cluster datastores other than etcd
Scope: host 10.2.2.10
Notes: version 5.7 is chosen because it is the version officially recommended by Rancher.
In step 5 below, you only need to create and grant users for the IPs of the server nodes.

1. Pull the mysql:5.7 image
[demo@2t10 ~]$ sudo docker pull mysql:5.7

2. Write the startup script
# The heredoc delimiter is quoted so the backslash line continuations below are written to the script verbatim
[demo@2t10 ~]$ cat >/home/demo/start-k3s-mysql.sh<<'EOF'
#!/bin/sh
set -x
set -e

sudo docker run \
--restart=always \
--name mysql-service \
-v /home/demo/k3s-mysql-data:/var/lib/mysql \
-p 13306:3306 \
-e MYSQL_ROOT_PASSWORD=root \
-d mysql:5.7 \
--character-set-server=utf8mb4 \
--collation-server=utf8mb4_general_ci \
--lower_case_table_names=1 \
--skip-name-resolve=1 \
--max_connections=1000 \
--wait_timeout=31536000 \
--interactive_timeout=31536000 \
--innodb_large_prefix=on \
--default-time-zone='+8:00'
EOF

3. Make the script executable and run it
[demo@2t10 ~]$ chmod +x start-k3s-mysql.sh
[demo@2t10 ~]$ ./start-k3s-mysql.sh 

4. Check that the container is running
[demo@2t10 ~]$ sudo docker ps -a
CONTAINER ID   IMAGE       COMMAND                  CREATED              STATUS          PORTS                                NAMES
00288eac50a8   mysql:5.7   "docker-entrypoint.s…"   About a minute ago   Up 59 seconds   33060/tcp, 0.0.0.0:13306->3306/tcp   mysql-service
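
Optionally, verify from one of the k3s server nodes that the published port is reachable; a minimal probe of my own using bash's built-in /dev/tcp (assuming no mysql client is installed on the nodes):

[demo@2t13 ~]$ timeout 3 bash -c '</dev/tcp/10.2.2.10/13306' && echo 'port 13306 reachable'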

5. Operate inside the container
[demo@2t10 ~]$ sudo docker exec -it mysql-service /bin/sh
# mysql -uroot -p               # ----> log in
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.35 MySQL Community Server (GPL)

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> create database k3s default charset utf8mb4;     # ----> create the k3s database
Query OK, 1 row affected (0.00 sec)

mysql> create user k3s@'10.2.2.10' identified by 'testdbk3s';     # ----> create a cluster user and set its password
Query OK, 0 rows affected (0.00 sec)

mysql> create user k3s@'10.2.2.11' identified by 'testdbk3s';     # ----> same, once per node IP
Query OK, 0 rows affected (0.00 sec)

mysql> create user k3s@'10.2.2.12' identified by 'testdbk3s';
Query OK, 0 rows affected (0.00 sec)

mysql> create user k3s@'10.2.2.13' identified by 'testdbk3s';
Query OK, 0 rows affected (0.00 sec)

mysql> create user k3s@'10.2.2.14' identified by 'testdbk3s';
Query OK, 0 rows affected (0.00 sec)

mysql> create user k3s@'10.2.2.15' identified by 'testdbk3s';
Query OK, 0 rows affected (0.00 sec)

mysql> create user k3s@'10.2.2.16' identified by 'testdbk3s';
Query OK, 0 rows affected (0.00 sec)

mysql> grant all on k3s.* to k3s@'10.2.2.10';    # ----> grant privileges on the k3s database
Query OK, 0 rows affected (0.01 sec)

mysql> grant all on k3s.* to k3s@'10.2.2.11';
Query OK, 0 rows affected (0.00 sec)

mysql> grant all on k3s.* to k3s@'10.2.2.12';
Query OK, 0 rows affected (0.00 sec)

mysql> grant all on k3s.* to k3s@'10.2.2.13';
Query OK, 0 rows affected (0.00 sec)

mysql> grant all on k3s.* to k3s@'10.2.2.14';
Query OK, 0 rows affected (0.00 sec)

mysql> grant all on k3s.* to k3s@'10.2.2.15';
Query OK, 0 rows affected (0.00 sec)

mysql> grant all on k3s.* to k3s@'10.2.2.16';
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;      # ----> reload the grant tables
Query OK, 0 rows affected (0.00 sec)
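
Since, per the note at the top of this section, only the server-node IPs (10.2.2.13 and 10.2.2.14 here) strictly need accounts, the whole interactive session can also be collapsed into a single non-interactive call. A sketch of my own (CREATE USER IF NOT EXISTS needs MySQL 5.7.6+, which mysql:5.7 satisfies; adjust the IP list to your topology):

[demo@2t10 ~]$ sudo docker exec -i mysql-service mysql -uroot -proot <<'EOF'
CREATE DATABASE IF NOT EXISTS k3s DEFAULT CHARSET utf8mb4;
CREATE USER IF NOT EXISTS k3s@'10.2.2.13' IDENTIFIED BY 'testdbk3s';
CREATE USER IF NOT EXISTS k3s@'10.2.2.14' IDENTIFIED BY 'testdbk3s';
GRANT ALL ON k3s.* TO k3s@'10.2.2.13';
GRANT ALL ON k3s.* TO k3s@'10.2.2.14';
FLUSH PRIVILEGES;
EOF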

VI. Installing the k3s server nodes

For server-1 (node 10.2.2.13)

1. Download the k3s offline installation files
[demo@2t13 k3s]$ pwd
/home/demo/k3s
[demo@2t13 k3s]$ ls -l
total 462584
-rw-rw-r-- 1 demo demo     26929 Oct 16 00:57 install.sh
-rw-rw-r-- 1 demo demo  56553472 Oct 16 00:57 k3s
-rw-rw-r-- 1 demo demo 417101824 Oct 16 00:57 k3s-airgap-images-amd64.tar
Notes (the versions used in this document):
install.sh comes from: https://get.k3s.io/
k3s is the main k3s binary. Download: https://github.com/k3s-io/k3s/releases/tag/v1.19.15+k3s2
k3s-airgap-images-amd64.tar holds the images k3s needs. Download: https://github.com/k3s-io/k3s/releases/tag/v1.19.15+k3s2

2. Load k3s-airgap-images-amd64.tar into Docker
[demo@2t13 k3s]$ sudo docker load -i k3s-airgap-images-amd64.tar

3. Make k3s executable and copy it into place
[demo@2t13 k3s]$ chmod +x k3s && sudo cp k3s /usr/local/bin/

4. Run the installer
# Add the following two lines at the very top of the k3s install script
[demo@2t13 k3s]$ vim install.sh
... ...
export INSTALL_K3S_SKIP_DOWNLOAD=true
export INSTALL_K3S_EXEC="server --datastore-endpoint=mysql://k3s:testdbk3s@tcp(10.2.2.10:13306)/k3s --docker --node-taint CriticalAddonsOnly=true:NoExecute --tls-san 10.2.2.100 --kube-apiserver-arg service-node-port-range=10000-65000 --no-deploy traefik --write-kubeconfig ~/.kube/config --write-kubeconfig-mode 666"
... ...

Note: the address after --tls-san is the SLB address for the cluster; it corresponds to the keepalived virtual VIP configured below.
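
Instead of editing install.sh, the same variables can equivalently be passed in the environment, since the script reads them from there; for example (with the full INSTALL_K3S_EXEC string shown above):

[demo@2t13 k3s]$ sudo INSTALL_K3S_SKIP_DOWNLOAD=true INSTALL_K3S_EXEC="server ..." ./install.sh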

# Run the script
[demo@2t13 k3s]$ sudo ./install.sh 
[INFO]  Skipping k3s download and verify
[INFO]  Skipping installation of SELinux RPM
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /bin/ctr
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink from /etc/systemd/system/multi-user.target.wants/k3s.service to /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s

5. Copy the kubeconfig from root's home directory to the current user's home
[demo@2t13 k3s]$ sudo cp -ar /root/.kube/config /home/demo/

6. Check the node
[demo@2t13 k3s]$ kubectl get node
NAME                STATUS   ROLES    AGE     VERSION
2t13.cluster.ydca   Ready    master   2m48s   v1.19.15+k3s2

7. Get the cluster token
[demo@2t13 k3s]$ sudo cat /var/lib/rancher/k3s/server/node-token
K10e1b1fcb4caf1f726580e0fb22d15ff4fcb48e5a26c0841b4c63b8176169a66f2::server:447912a715c422f1cce5893c37572280

For server-2 (node 10.2.2.14)

This node differs from server-1 in only one place:
in step 4 on server-1 we added two environment variables to install.sh;
on server-2 they should instead read:
export INSTALL_K3S_SKIP_DOWNLOAD=true
export INSTALL_K3S_EXEC="server --token K10e1b1fcb4caf1f726580e0fb22d15ff4fcb48e5a26c0841b4c63b8176169a66f2::server:447912a715c422f1cce5893c37572280 --datastore-endpoint=mysql://k3s:testdbk3s@tcp(10.2.2.10:13306)/k3s --docker --node-taint CriticalAddonsOnly=true:NoExecute --tls-san 10.2.2.100 --kube-apiserver-arg service-node-port-range=10000-65000 --no-deploy traefik --write-kubeconfig ~/.kube/config --write-kubeconfig-mode 666"

Note: compared with server-1, the only change is the added --token option, whose value comes from step 7 on server-1.

After both server nodes are deployed, check the cluster's nodes:

[demo@2t13 k3s]$ kubectl get node
NAME                STATUS   ROLES    AGE   VERSION
2t13.cluster.ydca   Ready    master   18m   v1.19.15+k3s2
2t14.cluster.ydca   Ready    master   12s   v1.19.15+k3s2

VII. Server-node load balancing and high availability with HAProxy + keepalived

1. Deploying and configuring HAProxy
Deploy haproxy-2.4.7
Install method: build from source
Scope: 10.2.2.11 and 10.2.2.12 (the steps are identical on both servers)

1. Download the source tarball
Download: http://www.haproxy.org/

2. Install gcc
[demo@2t11 haproxy]$ sudo yum install gcc -y

3. Upload the tarball to the server and extract it
[demo@2t11 haproxy]$ tar -xvf haproxy-2.4.7.tar.gz  #-------------------> extract
[demo@2t11 haproxy]$ ls -lth  #-------------------> list
total 3.5M
-rw-rw-r--  1 demo demo 3.5M Oct 16 21:37 haproxy-2.4.7.tar.gz
drwxrwxr-x 13 demo demo 4.0K Oct  4 20:56 haproxy-2.4.7

4. Check the system parameters
[demo@2t11 haproxy]$ uname -a
Linux 2t11.cluster 4.4.246-1.el7.elrepo.x86_64 #1 SMP Tue Nov 24 09:26:59 EST 2020 x86_64 x86_64 x86_64 GNU/Linux

5. Build and install
[demo@2t11 haproxy]$ cd haproxy-2.4.7  #-------------------> enter the extracted directory
[demo@2t11 haproxy-2.4.7]$ sudo make TARGET=linux-glibc ARCH=x86_64 PREFIX=/usr/local/haproxy  #-------------------> compile
[demo@2t11 haproxy-2.4.7]$ sudo make install PREFIX=/usr/local/haproxy  #-------------------> install

6. The HAProxy configuration file
[demo@2t11 haproxy]$ sudo mkdir /usr/local/haproxy/cfg #-------------------> create the config directory
[demo@2t11 haproxy]$ cat /usr/local/haproxy/cfg/haproxy.cfg #-------------------> config file contents
global
  daemon
  maxconn 4000
  pidfile /usr/local/haproxy/haproxy.pid

defaults
  log global
  option  httplog
  option  dontlognull
  timeout connect 5000
  timeout client 50000
  timeout server 50000

listen admin_stats  #------------> stats/monitoring UI section
  stats   enable
  bind    *:18090
  mode    http
  option  httplog
  log     global
  maxconn 10
  stats   refresh 5s
  stats   uri /admin #------------> stats page URI
  stats   realm haproxy
  stats   auth admin:HaproxyProd1212!@2021 #------------> stats login name and password
  stats   hide-version
  stats   admin if TRUE

frontend k3s-apiserver #------------> proxy frontend section
  bind *:6443 #------------> proxy listen port
  mode tcp
  option tcplog
  default_backend k3s-apiserver #------------> backend to use

backend k3s-apiserver #------------> proxy backend section
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100 #------------> load-balancing defaults
  server k3s-apiserver-13 10.2.2.13:6443 check #------------> backend target
  server k3s-apiserver-14 10.2.2.14:6443 check #------------> backend target
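
Before wiring HAProxy into systemd, the configuration can be validated with HAProxy's built-in check mode (a small extra step, not from the original article):

[demo@2t11 haproxy]$ sudo /usr/local/haproxy/sbin/haproxy -c -f /usr/local/haproxy/cfg/haproxy.cfg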

7. The systemd unit file (note: systemd does not perform backtick command substitution, so the unit below uses PIDFile and $MAINPID for the stop action)
[demo@2t11 haproxy]$ cat /usr/lib/systemd/system/haproxy.service  #-------------------> unit file contents
[Unit]
Description=HAProxy
After=network.target

[Service]
User=root
Type=forking
PIDFile=/usr/local/haproxy/haproxy.pid
ExecStart=/usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/cfg/haproxy.cfg
ExecStop=/bin/kill $MAINPID

[Install]
WantedBy=multi-user.target

8. Start HAProxy
[demo@2t12 haproxy-2.4.7]$ sudo systemctl daemon-reload
[demo@2t12 haproxy-2.4.7]$ sudo systemctl start haproxy
[demo@2t12 haproxy-2.4.7]$ sudo systemctl enable haproxy

9. Check the listening ports
[demo@2t11 haproxy]$ sudo netstat -tnlp|grep haproxy
tcp        0      0 0.0.0.0:18090           0.0.0.0:*               LISTEN      9340/haproxy        
tcp        0      0 0.0.0.0:6443            0.0.0.0:*               LISTEN      9340/haproxy        

Visit the HAProxy stats UI at http://10.2.2.11:18090/admin

(Screenshot: HAProxy stats page)

2. Deploying and configuring keepalived
Deploy keepalived-2.1.5
Install method: build from source
Scope: 10.2.2.11 and 10.2.2.12 (the keepalived config files differ between the two servers; the differences are flagged below)

1. Download the source tarball
https://www.keepalived.org/download.html

2. Upload it to the server and extract it
[demo@2t11 keepalived]$ tar -xvf keepalived-2.1.5.tar.gz

3. Install the dependencies
[demo@2t11 keepalived]$ sudo yum install curl gcc openssl-devel libnl3-devel net-snmp-devel -y

4. Configure
[demo@2t11 keepalived]$ cd keepalived-2.1.5 #-------------------> enter the extracted directory
[demo@2t11 keepalived]$ sudo ./configure --prefix=/usr/local/keepalived/ --sysconfdir=/etc #-------------------> configure

5. Compile and install
[demo@2t11 keepalived]$ sudo make && sudo make install

6. Check the install directory and copy the relevant files into place
[demo@2t11 keepalived-2.1.5]$ ls /usr/local/keepalived/
bin  etc  sbin  share
[demo@2t11 keepalived-2.1.5]$ pwd   #-------------------> current directory is the extracted source tree (where the build ran)
/home/demo/keepalived/keepalived-2.1.5
[demo@2t11 keepalived-2.1.5]$ sudo cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
[demo@2t11 keepalived-2.1.5]$ sudo cp /usr/local/keepalived/bin/genhash /usr/sbin/
[demo@2t11 keepalived-2.1.5]$ sudo cp keepalived/keepalived.service /usr/lib/systemd/system/
[demo@2t11 keepalived-2.1.5]$ sudo cp keepalived/etc/init.d/keepalived.rh.init /etc/sysconfig/keepalived.sysconfig

7. Write the configuration files
# For 10.2.2.11
[demo@2t11 keepalived]$ cd /etc/keepalived #------------> enter the config directory
[demo@2t11 keepalived]$ sudo mv keepalived.conf keepalived.conf.bak #------------> back up the default config and use the one below
[demo@2t11 keepalived]$ cat keepalived.conf #------------> config file contents
global_defs {
   notification_email {
       mail@lizhip.cn # addresses keepalived emails when a failover happens, one per line
   }
   notification_email_from 2691905373@qq.com # sender address
   smtp_server smtp.qq.com  # SMTP server address
   smtp_connect_timeout 30  # SMTP server connection timeout
   router_id 2t11.cluster  # string identifying this node, usually the hostname (though it need not be); used in failover notification mail
   script_user root
   enable_script_security
}

vrrp_script check_haproxy {
   script /etc/keepalived/check_haproxy.sh # haproxy health-check script
   interval 3
}

vrrp_instance VI_1 {
    state BACKUP # node role; non-preemptive mode is configured below, so both nodes are set to BACKUP
    nopreempt # non-preemptive mode
    interface ens33 # NIC carrying the node's own IP (not the VIP), used for VRRP heartbeat packets
    virtual_router_id 62 # virtual router ID, 0-255; distinguishes the VRRP multicast groups of different instances; must be unique within the subnet and identical between master and backup
    priority 100 # used for master election; to win reliably, set this well above (roughly 50 points) the other machines; valid range 1-255 (values outside it fall back to the default of 100)
    advert_int 1 # advertisement interval, default 1 second, i.e. a master election/health check every second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.2.2.100 # virtual VIP address; multiple entries are allowed
    }
    track_script {
        check_haproxy
    }
}
}

# For 10.2.2.12
global_defs {
   notification_email {
       mail@lizhip.cn
   }
   notification_email_from 2691905373@qq.com
   smtp_server smtp.qq.com
   smtp_connect_timeout 30
   router_id 2t12.cluster  # -----------------------> this also differs from 10.2.2.11: each node uses its own hostname
   script_user root
   enable_script_security
}

vrrp_script check_haproxy {
   script /etc/keepalived/check_haproxy.sh
   interval 3
} 

vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    interface ens33
    virtual_router_id 62
    priority 99  # -----------------------> this differs from 10.2.2.11
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.2.2.100
    }
    track_script {
        check_haproxy
    }
}

8. The check_haproxy script (make sure it is executable)
[demo@2t12 keepalived-2.1.5]$ cat /etc/keepalived/check_haproxy.sh
#!/bin/bash
# Stop keepalived (releasing the VIP) when haproxy is no longer running
haproxy_status=`/usr/sbin/pidof haproxy|wc -l`
if [ "$haproxy_status" -lt 1 ];then
     systemctl stop keepalived
fi

9. Start/stop management
[demo@2t12 keepalived-2.1.5]$ sudo systemctl daemon-reload
[demo@2t12 keepalived-2.1.5]$ sudo systemctl start keepalived
[demo@2t12 keepalived-2.1.5]$ sudo systemctl stop keepalived
[demo@2t12 keepalived-2.1.5]$ sudo systemctl enable keepalived

3. Access the HAProxy stats page through the keepalived virtual VIP: http://10.2.2.100:18090/admin

(Screenshot: stats page reached via the VIP)
4. Testing keepalived + haproxy high availability
Test procedure (a verification sketch follows the list):
Step 1: stop haproxy and keepalived on both 10.2.2.11 and 10.2.2.12.
Step 2: start haproxy on 10.2.2.11 and 10.2.2.12.
Step 3: start keepalived on 10.2.2.11 and 10.2.2.12 and check which node holds the virtual VIP.
Step 4: stop haproxy on the node holding the VIP (10.2.2.11) and check that the VIP floats to the other node.
Step 5: start haproxy and keepalived again on the node where they were stopped, then stop haproxy on the other node and check that the VIP floats back.
Step 6: if all of this checks out, the keepalived + haproxy pair is deployed and can provide high availability for the k3s cluster.
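
A simple way to watch the VIP move during these steps (ens33 as configured above; run on both 10.2.2.11 and 10.2.2.12):

[demo@2t11 ~]$ ip addr show ens33 | grep 10.2.2.100   # prints the VIP only on the node currently holding it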

VIII. Installing the k3s agent nodes

Scope: 10.2.2.15 and 10.2.2.16
Log in to 10.2.2.13 (server-1 above) and copy the three k3s files to 10.2.2.15 and 10.2.2.16:

[demo@2t13 k3s]$ cd /home/demo/k3s/ # ---> enter the k3s file directory
[demo@2t13 k3s]$ ls # ---> list
install.sh  k3s  k3s-airgap-images-amd64.tar # ---> these are the 3 files
[demo@2t13 k3s]$ scp ./* 10.2.2.15:/home/demo/k3s/ # ---> copy to 10.2.2.15
[demo@2t13 k3s]$ scp ./* 10.2.2.16:/home/demo/k3s/ # ---> copy to 10.2.2.16

Modify install.sh as follows (the change is identical on 10.2.2.15 and 10.2.2.16). Note that flags such as --datastore-endpoint, --kube-apiserver-arg, --no-deploy and --write-kubeconfig belong to the server subcommand and are not accepted by the agent, so INSTALL_K3S_EXEC here only needs the agent options; K3S_URL and K3S_TOKEN point the agent at the cluster through the VIP.

[demo@2t15 k3s]$  vim install.sh
... ...
export INSTALL_K3S_SKIP_DOWNLOAD=true
export K3S_TOKEN=K10e1b1fcb4caf1f726580e0fb22d15ff4fcb48e5a26c0841b4c63b8176169a66f2::server:447912a715c422f1cce5893c37572280
export K3S_URL=https://10.2.2.100:6443
export INSTALL_K3S_EXEC="agent --datastore-endpoint=mysql://k3s:testdbk3s@tcp(10.2.2.10:13306)/k3s --docker --kube-apiserver-arg service-node-port-range=10000-65000 --no-deploy traefik --write-kubeconfig ~/.kube/config --write-kubeconfig-mode 666"
... ...

Install

1. Make k3s executable and copy it into place
[demo@2t15 k3s]$ chmod +x k3s && sudo cp k3s /usr/local/bin/

2. Run the installer
[demo@2t15 k3s]$ sudo ./install.sh 
[sudo] password for demo: 
[INFO]  Skipping k3s download and verify
[INFO]  Skipping installation of SELinux RPM
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /bin/ctr
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO]  systemd: Enabling k3s-agent unit
Created symlink from /etc/systemd/system/multi-user.target.wants/k3s-agent.service to /etc/systemd/system/k3s-agent.service.
[INFO]  systemd: Starting k3s-agent

Log in to 10.2.2.13 and check the cluster nodes

[demo@2t13 k3s]$ kubectl get node
NAME                STATUS   ROLES    AGE   VERSION
2t16.cluster   Ready    <none>   15m   v1.19.15+k3s2
2t14.cluster   Ready    master   33m   v1.19.15+k3s2
2t13.cluster   Ready    master   77m   v1.19.15+k3s2
2t15.cluster   Ready    <none>   16m   v1.19.15+k3s2
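
The <none> role shown for the agents is cosmetic; if you prefer them to display as workers, a standard role label can be applied (an optional extra, not from the original article):

[demo@2t13 k3s]$ kubectl label node 2t15.cluster 2t16.cluster node-role.kubernetes.io/worker=worker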

IX. Installing the Rancher UI
Scope: 10.2.2.10

[demo@2t10 ~]$ sudo docker run --privileged -d -v /home/demo/rancherUI-data/:/var/lib/rancher --restart=unless-stopped --name rancher -p 80:80 -p 9443:443 rancher/rancher:v2.4.17
c93d4d3f1a273cb693d6caf3f515d88797172a81f36a3acf5ce2f75138e46e9e
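
Rancher takes a while to come up on first start; you can follow the container log until the UI answers (a basic check, not from the original article):

[demo@2t10 ~]$ sudo docker logs -f rancher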

Access the UI in a browser.

(Screenshots of the Rancher first-run setup are not reproduced here.)

Next, import the k3s cluster into Rancher following the original screenshots (not reproduced here).

Copy the highlighted command from Rancher's import screen and run it on node 10.2.2.13 or 10.2.2.14.