Binary Installation - Kubernetes HA Cluster (14) - Deploying the kube-proxy Component

2021-09-01  Chris0Yang

kube-proxy runs on all worker nodes. It watches the apiserver for changes to Services and Endpoints, and creates routing rules to load-balance traffic to services.

This document describes deploying kube-proxy in ipvs mode.

1. Create the kube-proxy certificate

Create the certificate signing request. The CN is set to system:kube-proxy, which the built-in RBAC ClusterRoleBinding system:node-proxier already authorizes to access the APIs kube-proxy needs:

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF

Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy
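
To sanity-check the result, you can inspect the certificate's Subject and confirm the CN is system:kube-proxy (assuming openssl is available on the host):

openssl x509 -noout -subject -in kube-proxy.pem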

2. Create and distribute the kubeconfig file

source /opt/k8s/bin/environment.sh

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
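
Optionally, verify that the cluster, user, and context were all written into the file as expected:

kubectl config view --kubeconfig=kube-proxy.kubeconfig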

Distribute the kubeconfig file:

cat > magic62_distribute_kubeconfig_file.sh << "EOF"
#!/bin/bash
# Distribute the kubeconfig file to every node
source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
do
    echo ">>> ${node_name}"
    scp kube-proxy.kubeconfig k8s@${node_name}:/etc/kubernetes/
done
EOF
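
The heredoc above only writes the script to disk; run it (like the later magicNN helper scripts) from the directory that contains kube-proxy.kubeconfig:

bash magic62_distribute_kubeconfig_file.sh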

3. Create the kube-proxy configuration file

Starting with v1.10, some kube-proxy parameters can be set in a configuration file. You can generate this file with the --write-config-to option, or refer to the kubeproxyconfig type definitions in the source: https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/apis/kubeproxyconfig/types.go
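
For example, to dump the default configuration values as a starting point (the output path below is just an example):

/opt/k8s/bin/kube-proxy --write-config-to=/tmp/kube-proxy-defaults.yaml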

Create the kube-proxy config file template (environment.sh, sourced in step 2, must still be loaded in the current shell so that ${CLUSTER_CIDR} expands):

cat >kube-proxy.config.yaml.template <<EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: ##NODE_IP##
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: ${CLUSTER_CIDR}
healthzBindAddress: ##NODE_IP##:10256
hostnameOverride: ##NODE_NAME##
kind: KubeProxyConfiguration
metricsBindAddress: ##NODE_IP##:10249
mode: "ipvs"
EOF
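
mode: "ipvs" requires the ipvs kernel modules on every worker node; if they are missing, kube-proxy falls back to iptables mode with a warning. A minimal sketch for loading them, assuming a kernel where these module names apply (older kernels use nf_conntrack_ipv4 instead of nf_conntrack):

# Load the ipvs-related kernel modules (run as root on each node)
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do
    modprobe ${mod}
done
# Confirm the modules are loaded
lsmod | grep -E "ip_vs|nf_conntrack"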

Create and distribute a kube-proxy configuration file for each node:

cat > magic63_createNode_distribute_kube-proxy_file.sh << "EOF"
#!/bin/bash
# Create and distribute a kube-proxy configuration file for each node
source /opt/k8s/bin/environment.sh
for (( i=0; i < ${#NODE_NAMES[@]}; i++ ))
do 
    echo ">>> ${NODE_NAMES[i]}"
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-proxy.config.yaml.template > kube-proxy-${NODE_NAMES[i]}.config.yaml
    scp kube-proxy-${NODE_NAMES[i]}.config.yaml root@${NODE_NAMES[i]}:/etc/kubernetes/kube-proxy.config.yaml
done
EOF

4. Create and distribute the kube-proxy systemd unit file

source /opt/k8s/bin/environment.sh

cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy.config.yaml \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=/var/log/kubernetes \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

Distribute the kube-proxy systemd unit file:

cat > magic64_distribute_kube-proxy_unitFile.sh << "EOF"
#!/bin/bash
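# Distribute the kube-proxy systemd unit file to every node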
source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
do 
    echo ">>> ${node_name}"
    scp kube-proxy.service root@${node_name}:/etc/systemd/system/
done
EOF

5. Start the kube-proxy service

cat > magic65_start_kube-proxy_server.sh << "EOF"
#!/bin/bash
# Start the kube-proxy service on every node
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
    echo ">>> ${node_ip}" 
    ssh root@${node_ip} "mkdir -p /var/lib/kube-proxy"
    ssh root@${node_ip} "mkdir -p /var/log/kubernetes && chown -R k8s /var/log/kubernetes"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl start kube-proxy"
done
EOF
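
Run the script to create the working and log directories and start kube-proxy on every node:

bash magic65_start_kube-proxy_server.sh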

6. Check the startup result

cat > magic66_check_server.sh << "EOF"
#!/bin/bash
# Check that kube-proxy started successfully on every node
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
    echo ">>> ${node_ip}" 
    ssh k8s@${node_ip} "systemctl status kube-proxy|grep Active"
done
EOF

If the output looks like the following:

bash magic66_check_server.sh
>>> 172.68.96.101
   Active: active (running) since Wed 20XX-XX-XX XX:XX:XX CST; XXh ago
>>> 172.68.96.102
   Active: active (running) since Wed 20XX-XX-XX XX:XX:XX CST; XXh ago
>>> 172.68.96.103
   Active: active (running) since Wed 20XX-XX-XX XX:XX:XX CST; XXh ago

then kube-proxy is running normally. If it failed to start, check the logs:

journalctl -xu kube-proxy

7. Check the listening ports and metrics

[k8s@master abc]$ sudo netstat -lnpt|grep kube-prox
tcp        0      0 172.68.96.101:10256     0.0.0.0:*               LISTEN      19061/kube-proxy
tcp        0      0 172.68.96.101:10249     0.0.0.0:*               LISTEN      19061/kube-proxy
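
Port 10256 is the health-check endpoint and port 10249 serves Prometheus metrics, matching healthzBindAddress and metricsBindAddress in the configuration above. A quick spot check (adjust the IP to the node you are on):

curl -s http://172.68.96.101:10256/healthz
curl -s http://172.68.96.101:10249/metrics | head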

8. Check the ipvs routing rules

cat > magic67_check_IP_rule.sh << "EOF"
#!/bin/bash
# Check the ipvs routing rules on every node
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
    echo ">>> ${node_ip}" 
    ssh root@${node_ip} "/usr/sbin/ipvsadm -ln"
done
EOF

Output:

bash magic67_check_IP_rule.sh
>>> 172.68.96.101
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr persistent 10800
  -> 172.68.96.101:6443         Masq    1      0          0
  -> 172.68.96.102:6443         Masq    1      0          0
  -> 172.68.96.103:6443         Masq    1      0          0

>>> 172.68.96.102
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr persistent 10800
  -> 172.68.96.101:6443         Masq    1      0          0
  -> 172.68.96.102:6443         Masq    1      0          0
  -> 172.68.96.103:6443         Masq    1      0          0

>>> 172.68.96.103
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr persistent 10800
  -> 172.68.96.101:6443         Masq    1      0          0
  -> 172.68.96.102:6443         Masq    1      0          0
  -> 172.68.96.103:6443         Masq    1      0          0

As you can see, all requests to port 443 of the kubernetes Service's cluster IP (10.254.0.1) are forwarded to port 6443 of the kube-apiserver on each master node.
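
You can cross-check the cluster IP that these ipvs rules implement against the kubernetes Service itself; the output should show CLUSTER-IP 10.254.0.1 and port 443/TCP:

kubectl get svc kubernetes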
