
Building a K8s Cluster from Binaries - Part 2: Multi-Master High Availability

2021-06-03  阿当运维

The previous article set up a single-master, multi-node architecture. Now we add a second master, master2 (192.168.1.252), to form a multi-master, multi-node high-availability architecture.

1. Initialize the system configuration

As before, I did this remotely from master1 with an Ansible playbook; details omitted.
master2's working directory is also /root/k8s.

2. Create the Master2 node

1. Create the etcd certificate directory on the new machine

mkdir -p /root/k8s/etcd/ssl
  2. Copy all the k8s files from master1 to master2
cd /root/k8s
scp -r kubernetes root@192.168.1.252:/root/k8s/
scp -r cni/ root@192.168.1.252:/root/k8s/
scp -r etcd/ssl root@192.168.1.252:/root/k8s/etcd/
scp /usr/lib/systemd/system/kube* root@192.168.1.252:/usr/lib/systemd/system/
scp /usr/bin/kubectl  root@192.168.1.252:/usr/bin

3. Delete the certificate files (on master2). These are master1's kubelet certificate and kubeconfig; master2's kubelet will request its own via the CSR flow approved below.

rm -f /root/k8s/kubernetes/cfg/kubelet.kubeconfig
rm -f /root/k8s/kubernetes/ssl/kubelet*

4. Change the IP and hostname in the config files
Make sure the hostname is k8s-master2 (Ansible should already have set it; start a new bash session to refresh it).
Change the apiserver, kubelet, and kube-proxy config files to use the local IP:

vi /root/k8s/kubernetes/cfg/kube-apiserver.conf
...
--bind-address=192.168.1.252 \
--advertise-address=192.168.1.252 \
...

vi /root/k8s/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-master2

vi /root/k8s/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-master2
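The three edits above can also be scripted with sed. A minimal sketch, dry-run against scratch copies of the files (point CFG at /root/k8s/kubernetes/cfg to apply it for real; the IPs and hostnames are the ones used in this series):

```shell
# Dry run of the master2 substitutions on scratch copies of the configs.
# Point CFG at /root/k8s/kubernetes/cfg (and drop the printf stand-ins) to apply for real.
CFG=$(mktemp -d)

# Stand-in files holding the master1 values, as copied over by scp.
printf -- '--bind-address=192.168.1.222 \\\n--advertise-address=192.168.1.222 \\\n' > "$CFG/kube-apiserver.conf"
printf -- '--hostname-override=k8s-master1\n' > "$CFG/kubelet.conf"
printf 'hostnameOverride: k8s-master1\n' > "$CFG/kube-proxy-config.yml"

# The actual substitutions: master1's IP -> master2's IP, master1's hostname -> master2's.
sed -i 's#192\.168\.1\.222#192.168.1.252#g' "$CFG/kube-apiserver.conf"
sed -i 's#k8s-master1#k8s-master2#g' "$CFG/kubelet.conf" "$CFG/kube-proxy-config.yml"

# All three files now show the master2 values.
grep -h 'address\|hostname\|Override' "$CFG"/*
```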

5. Start the services
Note: first check that the kubelet and docker cgroup drivers are consistent. Here docker was originally on cgroupfs while kubelet-config.yml was set to systemd, so the services failed to start; changing docker's driver fixed it.

[root@k8s-master2 cfg]# cat  /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
systemctl daemon-reload
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl start kubelet
systemctl start kube-proxy
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
systemctl enable kubelet
systemctl enable kube-proxy

6. Approve the kubelet certificate request

kubectl get csr
kubectl certificate approve  node-csr-9VHjmOdZ8LfpsCM0U56nWnCRX2Wb8Uae2Egvjwbp6d4

Deploy Nginx + Keepalived for High Availability

Architecture diagram: (image omitted)
How it works:

nginx reverse-proxies the master apiservers behind it, giving load balancing. A single nginx would itself be a single point of failure, so keepalived is installed on multiple nginx machines for high availability.

Clients access the VIP (bound at this point to the primary nginx, the one with the higher priority), and nginx forwards requests to the backend apiservers.
If nginx dies, keepalived's health check automatically moves the VIP to the backup nginx, which continues serving.

Steps:

To save machines, the HA roles here are all colocated on the k8s master machines.

  1. Run on both master1 and master2:
 yum install epel-release -y
 yum install nginx keepalived -y
2. Nginx config file (identical on primary and backup)

Back up the original nginx.conf first, then replace it with the following:

cp /etc/nginx/nginx.conf  /etc/nginx/nginx.conf.bak
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing for the two Master apiserver instances
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
       server 192.168.1.222:6443;   # Master1 APISERVER IP:PORT
       server 192.168.1.252:6443;   # Master2 APISERVER IP:PORT
    }
    
    server {
       listen 16443;  # the masters are reused as LB nodes, so pick a port that doesn't clash with apiserver's 6443
       proxy_pass k8s-apiserver;
    }
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    server {
        listen       80 default_server;
        server_name  _;

        location / {
        }
    }
}
3. keepalived config file (Nginx primary), VIP 192.168.1.188
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51 # VRRP router ID; unique per VRRP instance
    priority 100    # priority; set 90 on the backup
    advert_int 1    # VRRP heartbeat advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        192.168.1.188/24   # VIP address
    }
    track_script {
        check_nginx
    }
}

For the Nginx backup, copy the config above and change only:
state BACKUP
priority 90
and the interface name, to match the backup machine's actual NIC.
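Putting those three changes together, the backup's vrrp_instance block looks like this (the global_defs and vrrp_script sections stay the same as on the primary; the interface name is whatever NIC the backup actually uses):

```
vrrp_instance VI_1 {
    state BACKUP
    interface ens33        # change to the backup machine's NIC
    virtual_router_id 51   # must match the primary
    priority 90            # lower than the primary's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.188/24
    }
    track_script {
        check_nginx
    }
}
```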

Add the check_nginx script on both Nginx machines:
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")

if [ "$count" -eq 0 ]; then
    systemctl stop keepalived   # zero nginx processes means nginx is down; stop keepalived immediately so the VIP fails over
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
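To see what the `egrep -cv "grep|$$"` filter in the script is doing, here is a dry run of the same counting logic against a fake process listing (hypothetical PIDs and command lines; the real script reads live `ps -ef` output and also excludes its own PID via `$$`):

```shell
# Fake `ps -ef` output: two real nginx processes plus the grep that is
# searching for them. The -v filter drops the grep line so it is not
# mistaken for a running nginx.
fake_ps() {
  printf 'root   1001  nginx: master process /usr/sbin/nginx\n'
  printf 'nginx  1002  nginx: worker process\n'
  printf 'root   2001  grep nginx\n'
}

count=$(fake_ps | grep nginx | grep -cv "grep")
echo "count=$count"   # count=2 -> nginx is up, keepalived keeps running
```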
Test high availability

Start nginx and keepalived on both machines (systemctl start nginx keepalived), then check the NIC on the primary: the VIP is already bound.

Now kill nginx on the primary: pkill nginx
The backup completes the failover and takes over the VIP.
Point all Worker Nodes at the load balancer VIP

Now change the apiserver address the worker nodes were using (master1's) to the VIP, so every worker node goes through the load balancer; otherwise master1 is still a single point of failure.

Run on each worker node:

sed -i  's#192.168.1.222:6443#192.168.1.188:16443#' /root/k8s/kubernetes/cfg/*
systemctl restart kubelet
systemctl restart kube-proxy
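If you want to sanity-check that sed expression before touching the live configs, here is a dry run on a scratch file (the scratch path and file content are illustrative; the addresses are the real ones from this series):

```shell
# Dry run of the VIP rewrite on a scratch kubeconfig fragment.
tmp=$(mktemp -d)
printf 'server: https://192.168.1.222:6443\n' > "$tmp/bootstrap.kubeconfig"

# Same substitution as on the worker nodes: master1's apiserver -> the nginx VIP.
sed -i 's#192.168.1.222:6443#192.168.1.188:16443#' "$tmp"/*

grep server "$tmp/bootstrap.kubeconfig"   # server: https://192.168.1.188:16443
```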
Test the load balancer

From any node in the cluster, query the k8s version through the VIP:
curl -k https://192.168.1.188:16443/version

Check the node status: kubectl get nodes

That concludes building a highly available k8s cluster from binaries.
On a public cloud, keepalived generally won't work: clouds block this kind of multicast/VRRP traffic, so use the cloud provider's LB product instead. The architecture is the same, directly load balancing across the masters' kube-apiservers.
