Kubernetes v1.10.4 (binary) installation and deployment guide

2018-07-07  Horne

一、Component versions and configuration strategy

1、Component versions

Kubernetes 1.10.4
Docker 18.03.1-ce
Etcd 3.3.7
Flanneld 0.10.0
Add-ons:
Coredns
Dashboard
Heapster (influxdb、grafana)
Metrics-Server
EFK (elasticsearch、fluentd、kibana)
Image registries:
docker registry
harbor

2、Main configuration strategy

2.1、kube-apiserver:

Disable the insecure port 8080 and anonymous access;
Serve https requests on the secure port 6443;
Strict authentication and authorization policies (x509, token, RBAC);
Enable bootstrap token authentication to support kubelet TLS bootstrapping;
Access kubelet and etcd over https for encrypted communication;

2.2、kube-controller-manager:

3-node high availability;
Close the insecure port and serve https requests on the secure port 10252;
Use a kubeconfig to access the apiserver's secure port;
Automatically approve kubelet certificate signing requests (CSRs) and rotate certificates automatically on expiry;
Each controller uses its own ServiceAccount to access the apiserver;

2.3、kube-scheduler:

3-node high availability;
Use a kubeconfig to access the apiserver's secure port;

2.4、kubelet:

Use kubeadm to create bootstrap tokens dynamically instead of configuring them statically in the apiserver;
Use the TLS bootstrap mechanism to generate client and server certificates automatically and rotate them on expiry;
Configure the main parameters in a KubeletConfiguration-type JSON file;
Close the read-only port; serve https requests on the secure port 10250 with authentication and authorization, rejecting anonymous and unauthorized access;
Use a kubeconfig to access the apiserver's secure port;

2.5、kube-proxy:

Use a kubeconfig to access the apiserver's secure port;
Configure the main parameters in a KubeProxyConfiguration-type JSON file;
Use the ipvs proxy mode;

2.6、Cluster add-ons:

DNS: coredns, which has better functionality and performance;
Dashboard: supports login authentication;
Metrics: heapster and metrics-server, accessing the kubelet secure port over https;
Log: Elasticsearch, Fluentd, Kibana;
Registry: docker-registry, harbor;

二、System initialization and global variables

## 1、Cluster machines (the etcd cluster, master nodes and worker nodes in this document all share these three machines)
Hostname       IP               CPU     Mem     Notes
k8s-01m        192.168.10.51    1C      1G      etcd keepalived master kubectl
k8s-02m        192.168.10.52    1C      1G      etcd keepalived master node docker
k8s-03m        192.168.10.53    1C      1G      etcd keepalived master node docker flannel
VIP            192.168.10.50                    vip

## 2、Hostnames (edit /etc/hosts on every machine to add the hostname-to-IP mappings, then set each node's hostname)
cat <<EOF > /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.51 k8s-01m k8s-01m.test.com
192.168.10.52 k8s-02m k8s-02m.test.com
192.168.10.53 k8s-03m k8s-03m.test.com
EOF
hostnamectl set-hostname k8s-01m.test.com   # run on k8s-01m
hostnamectl set-hostname k8s-02m.test.com   # run on k8s-02m
hostnamectl set-hostname k8s-03m.test.com   # run on k8s-03m

## 3、Add the k8s and docker accounts (add a k8s account on every machine with passwordless sudo)
sudo useradd -u 1000 -d /usr/local/k8s k8s && echo '123456' | sudo passwd --stdin k8s
# add the k8s user to the wheel group
sudo gpasswd -a k8s wheel

## 4、Passwordless ssh login to the other nodes
* Unless otherwise noted, all operations in this document are executed on the k8s-01m node, and files/commands are then distributed remotely.
* Allow k8s-01m to log in to the k8s and root accounts of every node without a password:
su - k8s
ssh-keygen -t rsa
ssh-copy-id root@k8s-01m
ssh-copy-id root@k8s-02m
ssh-copy-id root@k8s-03m

ssh-copy-id k8s@k8s-01m
ssh-copy-id k8s@k8s-02m
ssh-copy-id k8s@k8s-03m

## 5、Disable the firewall
* Disable the firewall on every machine:
sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo iptables -F && sudo iptables -X && sudo iptables -F -t nat && sudo iptables -X -t nat
sudo iptables -P FORWARD ACCEPT
# disable SELinux
sudo setenforce 0
sudo sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sudo sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config

## 6、Set kernel parameters
sudo bash -c "cat > /etc/sysctl.d/kubernetes.conf" <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
EOF
sudo sysctl -p /etc/sysctl.d/kubernetes.conf
sudo modprobe br_netfilter

* Switch to the root account and run:
for intf in /sys/devices/virtual/net/docker0/brif/*; do echo 1 > $intf/hairpin_mode; done

## 7、Add the binary path /usr/local/k8s/bin to the PATH variable
sudo bash -c 'cat > /etc/profile.d/kubernetes.sh' <<"EOF"
#kubernetes library
export K8S_HOME=/usr/local/k8s
export PATH=$PATH:$K8S_HOME/bin
EOF
source /etc/profile.d/kubernetes.sh

## 8、Create directories
* Create the directories on every machine:
sudo mkdir -p /usr/local/k8s/{bin,cert,conf,yaml,logs}
sudo mkdir -p /etc/etcd/cert
sudo mkdir -p /var/lib/etcd
sudo chown -R k8s:k8s /usr/local/k8s
sudo chown -R k8s:k8s /etc/etcd/cert
sudo chown -R k8s:k8s /var/lib/etcd

## 9、Cluster environment variable script
* The global environment variables defined below are used by the later deployment steps; adjust them for your own machines and network:
* The packaged variable definitions live in environment.sh; later deployment steps will remind you to source this script;
cat > /usr/local/k8s/bin/environment.sh <<'EOF'
#!/usr/bin/bash
# encryption key used by the EncryptionConfig
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

# preferably use currently unused address ranges for the service and Pod networks
# service network: unroutable before deployment, routable inside the cluster afterwards (guaranteed by kube-proxy and ipvs)
SERVICE_CIDR="10.254.0.0/16"

# Pod network, a /16 range is recommended; unroutable before deployment, routable inside the cluster afterwards (guaranteed by flanneld)
CLUSTER_CIDR="172.30.0.0/16"

# service NodePort range
export NODE_PORT_RANGE="20000-40000"

# array of MASTER node IPs
export MASTER_IPS=(192.168.10.51 192.168.10.52 192.168.10.53)

# array of hostnames corresponding to the MASTER node IPs
export MASTER_NAMES=(k8s-01m k8s-02m k8s-03m)

# array of WORKER node IPs
export WORKER_IPS=(192.168.10.51 192.168.10.52 192.168.10.53)

# array of hostnames corresponding to the WORKER node IPs
export WORKER_NAMES=(k8s-01m k8s-02m k8s-03m)

# VIP of the kube-apiserver nodes
export MASTER_VIP=192.168.10.50

# kube-apiserver https address
export KUBE_APISERVER="https://${MASTER_VIP}:6443"

# etcd cluster client endpoints
export ETCD_ENDPOINTS="https://192.168.10.51:2379,https://192.168.10.52:2379,https://192.168.10.53:2379"

# IPs and ports used for communication between etcd cluster members
export ETCD_NODES="k8s-01m=https://192.168.10.51:2380,k8s-02m=https://192.168.10.52:2380,k8s-03m=https://192.168.10.53:2380"

# etcd prefix of the flanneld network configuration
export FLANNEL_ETCD_PREFIX="/kubernetes/network"

# network interface used for inter-node communication
export IFACE=eth0

# kubernetes service IP (usually the first IP of SERVICE_CIDR)
export CLUSTER_KUBERNETES_SVC_IP="10.254.0.1"

# cluster DNS service IP (pre-allocated from SERVICE_CIDR)
export CLUSTER_DNS_SVC_IP="10.254.0.2"

# cluster DNS domain; because of a domain-name validation bug in Go 1.9, the suffix must not end with a dot
export CLUSTER_DNS_DOMAIN="cluster.local"

# add the binary directory /usr/local/k8s/bin to PATH
export PATH=/usr/local/k8s/bin:$PATH

# add the binary directory /usr/local/cfssl/bin to PATH
export PATH=/usr/local/cfssl/bin:$PATH

# per-node keepalived configuration
export KEEPALIVED_STATE=(MASTER BACKUP BACKUP)
export KEEPALIVED_PRI=(100 90 80)
EOF

## 10、Distribute the cluster environment variable script

source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /usr/local/k8s/bin/environment.sh k8s@${master_ip}:/usr/local/k8s/bin/
    ssh k8s@${master_ip} "chmod +x /usr/local/k8s/bin/*"
done

三、Create the CA certificate and key

1、Install the cfssl toolset

sudo mkdir -p /usr/local/cfssl/{bin,cert}
sudo wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/cfssl/bin/cfssl
sudo wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/local/cfssl/bin/cfssljson
sudo wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/local/cfssl/bin/cfssl-certinfo
sudo chmod +x /usr/local/cfssl/bin/*
sudo chown -R k8s /usr/local/cfssl
export PATH=/usr/local/cfssl/bin:$PATH

2、Create the JSON config file used to generate the root certificate (CA)

cat > /usr/local/cfssl/cert/ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

3、Create the JSON config file for the certificate signing request (CSR)

cat > /usr/local/cfssl/cert/ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

4、Generate the CA certificate and private key

cd /usr/local/cfssl/cert/
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
* Inspect the CA certificate
openssl x509 -noout -text -in /usr/local/cfssl/cert/ca.pem

5、Distribute the CA certificate files

source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /usr/local/cfssl/cert/{ca*.pem,ca-config.json} k8s@${master_ip}:/usr/local/k8s/cert
done

6、References

四、Deploy the etcd cluster

1、Download and distribute the etcd binaries

* Download the latest release from: https://github.com/coreos/etcd/releases
sudo wget https://github.com/coreos/etcd/releases/download/v3.3.7/etcd-v3.3.7-linux-amd64.tar.gz -P /usr/local/src
sudo tar -xvf /usr/local/src/etcd-v3.3.7-linux-amd64.tar.gz -C /usr/local/src
* Distribute the binaries to all cluster nodes:
source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /usr/local/src/etcd-v3.3.7-linux-amd64/etcd* k8s@${master_ip}:/usr/local/k8s/bin
    ssh k8s@${master_ip} "chmod +x /usr/local/k8s/bin/*"
done

2、Create the etcd certificate and private key

cat > /usr/local/cfssl/cert/etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "${MASTER_IPS[0]}",
    "${MASTER_IPS[1]}",
    "${MASTER_IPS[2]}"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cd /usr/local/cfssl/cert/
cfssl gencert -ca=/usr/local/k8s/cert/ca.pem \
    -ca-key=/usr/local/k8s/cert/ca-key.pem \
    -config=/usr/local/k8s/cert/ca-config.json \
    -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
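
* Optionally inspect the generated certificate (mirroring the CA check above) to confirm that the SAN list contains the three node IPs:
openssl x509 -noout -text -in /usr/local/cfssl/cert/etcd.pem | grep -A1 'Subject Alternative Name'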

3、Distribute the etcd certificate files

source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "mkdir -p /etc/etcd/cert && chown -R k8s:k8s /etc/etcd/cert"
    scp /usr/local/cfssl/cert/etcd*.pem k8s@${master_ip}:/etc/etcd/cert/
done

4、Create the etcd systemd unit template file

source /usr/local/k8s/bin/environment.sh
sudo bash -c "cat > /lib/systemd/system/etcd.service" <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/k8s/bin/etcd \\
  --data-dir=/var/lib/etcd \\
  --name=${MASTER_NAMES[0]} \\
  --cert-file=/etc/etcd/cert/etcd.pem \\
  --key-file=/etc/etcd/cert/etcd-key.pem \\
  --trusted-ca-file=/usr/local/k8s/cert/ca.pem \\
  --peer-cert-file=/etc/etcd/cert/etcd.pem \\
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \\
  --peer-trusted-ca-file=/usr/local/k8s/cert/ca.pem \\
  --listen-peer-urls=https://${MASTER_IPS[0]}:2380 \\
  --initial-advertise-peer-urls=https://${MASTER_IPS[0]}:2380 \\
  --listen-client-urls=https://${MASTER_IPS[0]}:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://${MASTER_IPS[0]}:2379 \\
  --initial-cluster-token=k8s-etcd-cluster \\
  --initial-cluster=${ETCD_NODES} \\
  --initial-cluster-state=new
Restart=on-failure
RestartSec=5
User=k8s
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

5、Distribute the etcd systemd unit file

source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    master_name=`ssh root@${master_ip} "hostname -s"`
    ssh root@${master_ip} "mkdir -p /var/lib/etcd && chown -R k8s:k8s /var/lib/etcd"
    scp /lib/systemd/system/etcd.service root@${master_ip}:/lib/systemd/system/etcd.service
    ssh root@${master_ip} "sed -i 's@name=${MASTER_NAMES[0]}@name=${master_name}@g' /lib/systemd/system/etcd.service"
    ssh root@${master_ip} "sed -i '/cluster/! s@${MASTER_IPS[0]}@${master_ip}@g' /lib/systemd/system/etcd.service"
done

6、Start the etcd service

source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd"
done

7、Check the startup result

source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    ssh k8s@${master_ip} "systemctl status etcd"
done

8、Verify the service status

source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
/usr/local/k8s/bin/etcdctl \
    --endpoints=https://${master_ip}:2379 \
    --ca-file=/usr/local/k8s/cert/ca.pem \
    --cert-file=/etc/etcd/cert/etcd.pem \
    --key-file=/etc/etcd/cert/etcd-key.pem cluster-health
done
source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    ETCDCTL_API=3 /usr/local/k8s/bin/etcdctl \
    --endpoints=https://${master_ip}:2379 \
    --cacert=/usr/local/k8s/cert/ca.pem \
    --cert=/etc/etcd/cert/etcd.pem \
    --key=/etc/etcd/cert/etcd-key.pem endpoint health
done
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    ETCDCTL_API=3 /usr/local/k8s/bin/etcdctl \
    --endpoints=https://${master_ip}:2379 \
    --cacert=/usr/local/k8s/cert/ca.pem \
    --cert=/etc/etcd/cert/etcd.pem \
    --key=/etc/etcd/cert/etcd-key.pem member list
done

五、Deploy the kubectl command-line tool

1、Download and distribute the kubectl binary

sudo wget https://dl.k8s.io/v1.10.4/kubernetes-client-linux-amd64.tar.gz -P /usr/local/src
sudo tar -xzvf /usr/local/src/kubernetes-client-linux-amd64.tar.gz -C /usr/local/src
source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /usr/local/src/kubernetes/client/bin/kubectl k8s@${master_ip}:/usr/local/k8s/bin/
    ssh k8s@${master_ip} "chmod +x /usr/local/k8s/bin/*"
done

2、Create the admin certificate and private key

cat > /usr/local/cfssl/cert/admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
source /usr/local/k8s/bin/environment.sh
cd /usr/local/cfssl/cert/
cfssl gencert -ca=/usr/local/k8s/cert/ca.pem \
  -ca-key=/usr/local/k8s/cert/ca-key.pem \
  -config=/usr/local/k8s/cert/ca-config.json \
  -profile=kubernetes admin-csr.json | cfssljson -bare admin

3、Distribute the admin certificate files

source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /usr/local/cfssl/cert/admin*.pem k8s@${master_ip}:/usr/local/k8s/cert
done

4、Create the kubectl.kubeconfig file

source /usr/local/k8s/bin/environment.sh
* Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/usr/local/k8s/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER}

* Set client authentication parameters
kubectl config set-credentials admin \
  --client-certificate=/usr/local/k8s/cert/admin.pem \
  --client-key=/usr/local/k8s/cert/admin-key.pem \
  --embed-certs=true

* Set context parameters
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin

* Set the default context
kubectl config use-context kubernetes

* --certificate-authority: the root CA certificate used to verify the kube-apiserver certificate
* --client-certificate, --client-key: the admin certificate and private key just generated, used when connecting to kube-apiserver
* --embed-certs=true: embed the contents of ca.pem and admin.pem into the generated kubectl.kubeconfig file (without it, only the file paths are written)

* Inspect the generated file
cat ~/.kube/config

5、Distribute the kubectl.kubeconfig file

source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    ssh k8s@${master_ip} "mkdir -p ~/.kube"
    scp ~/.kube/config k8s@${master_ip}:~/.kube
    ssh root@${master_ip} "mkdir -p ~/.kube"
    scp ~/.kube/config root@${master_ip}:~/.kube
done

六、Deploy the keepalived nodes

1、Install and configure keepalived on the three nodes to provide the VIP

source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "yum install keepalived -y"
    ssh root@${master_ip} "cp /etc/keepalived/keepalived.conf{,.bak}"
done

2、Configure keepalived

source /usr/local/k8s/bin/environment.sh
sudo bash -c "cat > /etc/keepalived/keepalived.conf" <<EOF
global_defs {
   notification_email {
   386639226@qq.com
   }
   notification_email_from 386639226@qq.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "/usr/bin/curl -k https://${MASTER_IPS[0]}:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state ${KEEPALIVED_STATE[0]}
    interface eth0
    virtual_router_id 50
    priority ${KEEPALIVED_PRI[0]}    # the master node has the highest priority, decreasing on the others
    advert_int 1    # advertisement interval, default 1s
    nopreempt       # do not preempt the VIP
    # use unicast to avoid interference between multiple keepalived groups on the same LAN
    mcast_src_ip ${MASTER_IPS[0]}  # change to the local IP
    unicast_peer {
        ${MASTER_IPS[1]}
        ${MASTER_IPS[2]}
    }
    authentication {
        auth_type PASS
        auth_pass sqP05dQgMSlzrxHj
    }

    virtual_ipaddress {
        ${MASTER_VIP}/24
    }
    track_script {
        CheckK8sMaster
    }

}
EOF

3、Distribute the keepalived configuration file

source /usr/local/k8s/bin/environment.sh
for (( i=0; i < 3; i++ ));do
    echo ">>> ${MASTER_IPS[i]}"
    scp /etc/keepalived/keepalived.conf root@${MASTER_IPS[i]}:/etc/keepalived
    ssh root@${MASTER_IPS[i]} "sed -i 's/${MASTER_IPS[0]}/${MASTER_IPS[i]}/g' /etc/keepalived/keepalived.conf"
    ssh root@${MASTER_IPS[i]} "sed -i '/unicast_peer/,/}/s/${MASTER_IPS[i]}/${MASTER_IPS[0]}/g' /etc/keepalived/keepalived.conf"
    ssh root@${MASTER_IPS[i]} "sed -i 's/${KEEPALIVED_STATE[0]}/${KEEPALIVED_STATE[i]}/g' /etc/keepalived/keepalived.conf"
    ssh root@${MASTER_IPS[i]} "sed -i 's/${KEEPALIVED_PRI[0]}/${KEEPALIVED_PRI[i]}/g' /etc/keepalived/keepalived.conf"
done

4、Start keepalived

for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "systemctl daemon-reload && systemctl enable keepalived && systemctl restart keepalived"
done

5、Check the startup result

source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    ssh k8s@${master_ip} "systemctl status keepalived"
done

6、Verify the service status (the VIP is not visible yet because the apiserver has not been started)

for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    ssh k8s@${master_ip} "/usr/sbin/ip a && ipvsadm -Ln && ipvsadm -lnc"
done

七、Deploy the master nodes

1、Download and distribute the latest binaries

sudo wget https://dl.k8s.io/v1.10.4/kubernetes-server-linux-amd64.tar.gz -P /usr/local/src
sudo tar -xzvf /usr/local/src/kubernetes-server-linux-amd64.tar.gz -C /usr/local/src
sudo tar -xzvf /usr/local/src/kubernetes/kubernetes-src.tar.gz -C /usr/local/src/kubernetes/
source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /usr/local/src/kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubeadm} k8s@${master_ip}:/usr/local/k8s/bin/
    ssh k8s@${master_ip} "chmod +x /usr/local/k8s/bin/*"
done

2、Deploy the kube-apiserver component

2.1、Create the kubernetes certificate and private key

source /usr/local/k8s/bin/environment.sh
cat > /usr/local/cfssl/cert/kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "${MASTER_VIP}",
    "${MASTER_IPS[0]}",
    "${MASTER_IPS[1]}",
    "${MASTER_IPS[2]}",
    "${CLUSTER_KUBERNETES_SVC_IP}",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.${CLUSTER_DNS_DOMAIN}"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
source /usr/local/k8s/bin/environment.sh
cd /usr/local/cfssl/cert/
cfssl gencert -ca=/usr/local/k8s/cert/ca.pem \
  -ca-key=/usr/local/k8s/cert/ca-key.pem \
  -config=/usr/local/k8s/cert/ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

2.2、Distribute the kubernetes certificate files

source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "mkdir -p /usr/local/k8s/cert/ && chown -R k8s:k8s /usr/local/k8s/cert/"
    scp /usr/local/cfssl/cert/kubernetes*.pem k8s@${master_ip}:/usr/local/k8s/cert/
done

2.3、Create the encryption config file

source /usr/local/k8s/bin/environment.sh
cat > /usr/local/k8s/yaml/encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
cat <<EOF >/usr/local/k8s/cert/bootstrap-token.csv
${ENCRYPTION_KEY},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
cat <<EOF >/usr/local/k8s/cert/basic-auth.csv
admin,admin,1
readonly,readonly,2
EOF
source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /usr/local/k8s/yaml/encryption-config.yaml k8s@${master_ip}:/usr/local/k8s/yaml/
    #scp /usr/local/k8s/cert/{bootstrap-token.csv,basic-auth.csv} k8s@${master_ip}:/usr/local/k8s/cert
done

2.4、Create the kube-apiserver systemd unit file

source /usr/local/k8s/bin/environment.sh
sudo bash -c "cat > /lib/systemd/system/kube-apiserver.service" <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/k8s/bin/kube-apiserver \\
  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --anonymous-auth=false \\
  --experimental-encryption-provider-config=/usr/local/k8s/yaml/encryption-config.yaml \\
  --advertise-address=0.0.0.0 \\
  --bind-address=0.0.0.0 \\
  --insecure-bind-address=127.0.0.1 \\
  --secure-port=6443 \\
  --insecure-port=0 \\
  --authorization-mode=Node,RBAC \\
  --runtime-config=api/all \\
  --enable-bootstrap-token-auth \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --service-node-port-range=${NODE_PORT_RANGE} \\
  --tls-cert-file=/usr/local/k8s/cert/kubernetes.pem \\
  --tls-private-key-file=/usr/local/k8s/cert/kubernetes-key.pem \\
  --client-ca-file=/usr/local/k8s/cert/ca.pem \\
  --kubelet-client-certificate=/usr/local/k8s/cert/kubernetes.pem \\
  --kubelet-client-key=/usr/local/k8s/cert/kubernetes-key.pem \\
  --service-account-key-file=/usr/local/k8s/cert/ca-key.pem \\
  --etcd-cafile=/usr/local/k8s/cert/ca.pem \\
  --etcd-certfile=/usr/local/k8s/cert/kubernetes.pem \\
  --etcd-keyfile=/usr/local/k8s/cert/kubernetes-key.pem \\
  --etcd-servers=${ETCD_ENDPOINTS} \\
  --enable-swagger-ui=true \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/usr/local/k8s/logs/api-audit.log \\
  --event-ttl=1h \\
  --v=2 \\
  --logtostderr=false \\
  --log-dir=/usr/local/k8s/logs
Restart=on-failure
RestartSec=5
Type=notify
User=k8s
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

2.5、Distribute the systemd unit file to the master nodes:

source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /lib/systemd/system/kube-apiserver.service root@${master_ip}:/lib/systemd/system
done

2.6、Start the kube-apiserver service

source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl start kube-apiserver"
done

2.7、Check the kube-apiserver running status

source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "systemctl status kube-apiserver"
done

2.8、Grant the kubernetes certificate permission to access the kubelet API
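
* The original leaves this section without commands; a minimal sketch: kube-apiserver presents the kubernetes certificate (CN "kubernetes") as its kubelet client certificate, so binding that user to the built-in system:kubelet-api-admin ClusterRole grants it access to the kubelet API:
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes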

2.9、Print the data kube-apiserver has written to etcd

source /usr/local/k8s/bin/environment.sh
ETCDCTL_API=3 etcdctl \
    --endpoints=${ETCD_ENDPOINTS} \
    --cacert=/usr/local/k8s/cert/ca.pem \
    --cert=/etc/etcd/cert/etcd.pem \
    --key=/etc/etcd/cert/etcd-key.pem \
    get /registry/ --prefix --keys-only

2.10、Check cluster information

kubectl cluster-info
kubectl get all --all-namespaces
kubectl get componentstatuses

2.11、Check the ports kube-apiserver is listening on

sudo netstat -lnpt|grep kube
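
* Optionally verify the secure port through the VIP using the admin certificate created earlier (a simple sketch; /healthz should return "ok"):
source /usr/local/k8s/bin/environment.sh
curl -s --cacert /usr/local/k8s/cert/ca.pem --cert /usr/local/k8s/cert/admin.pem --key /usr/local/k8s/cert/admin-key.pem https://${MASTER_VIP}:6443/healthz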

3、Deploy the highly available kube-controller-manager cluster

3.1、Create the kube-controller-manager certificate and private key

cat > /usr/local/cfssl/cert/kube-controller-manager-csr.json <<EOF
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "${MASTER_IPS[0]}",
      "${MASTER_IPS[1]}",
      "${MASTER_IPS[2]}"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-controller-manager",
        "OU": "System"
      }
    ]
}
EOF
source /usr/local/k8s/bin/environment.sh
cd /usr/local/cfssl/cert/
cfssl gencert -ca=/usr/local/k8s/cert/ca.pem \
  -ca-key=/usr/local/k8s/cert/ca-key.pem \
  -config=/usr/local/k8s/cert/ca-config.json \
  -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

3.2、Distribute the kube-controller-manager certificate files

source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /usr/local/cfssl/cert/kube-controller-manager*.pem k8s@${master_ip}:/usr/local/k8s/cert/
done

3.3、Create the kube-controller-manager.kubeconfig file

source /usr/local/k8s/bin/environment.sh
cd /usr/local/k8s/conf
kubectl config set-cluster kubernetes \
  --certificate-authority=/usr/local/k8s/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/usr/local/k8s/cert/kube-controller-manager.pem \
  --client-key=/usr/local/k8s/cert/kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

3.4、Distribute the kube-controller-manager.kubeconfig file

source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /usr/local/k8s/conf/kube-controller-manager.kubeconfig k8s@${master_ip}:/usr/local/k8s/conf/
done

3.5、Create the kube-controller-manager systemd unit file

source /usr/local/k8s/bin/environment.sh
sudo bash -c "cat > /lib/systemd/system/kube-controller-manager.service" <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/k8s/bin/kube-controller-manager \\
  --address=127.0.0.1 \\
  --master=${KUBE_APISERVER} \\
  --kubeconfig=/usr/local/k8s/conf/kube-controller-manager.kubeconfig \\
  --allocate-node-cidrs=true \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --cluster-cidr=${CLUSTER_CIDR} \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/usr/local/k8s/cert/ca.pem \\
  --cluster-signing-key-file=/usr/local/k8s/cert/ca-key.pem \\
  --experimental-cluster-signing-duration=8760h \\
  --leader-elect=true \\
  --feature-gates=RotateKubeletServerCertificate=true \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --horizontal-pod-autoscaler-use-rest-clients=true \\
  --horizontal-pod-autoscaler-sync-period=10s \\
  --tls-cert-file=/usr/local/k8s/cert/kube-controller-manager.pem \\
  --tls-private-key-file=/usr/local/k8s/cert/kube-controller-manager-key.pem \\
  --service-account-private-key-file=/usr/local/k8s/cert/ca-key.pem \\
  --root-ca-file=/usr/local/k8s/cert/ca.pem \\
  --use-service-account-credentials=true \\
  --v=2 \\
  --logtostderr=false \\
  --log-dir=/usr/local/k8s/logs
Restart=on-failure
RestartSec=5
User=k8s

[Install]
WantedBy=multi-user.target
EOF

3.6、Distribute the systemd unit file to all master nodes:

source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /lib/systemd/system/kube-controller-manager.service root@${master_ip}:/lib/systemd/system/
done

3.7、Start the kube-controller-manager service

source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager"
done

3.8、Check the service running status

source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    ssh k8s@${master_ip} "systemctl status kube-controller-manager"
    # ssh k8s@${master_ip} "systemctl stop kube-controller-manager"
done

3.9、View the exposed metrics
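
* Left empty in the original; a minimal sketch, assuming kube-controller-manager still serves its insecure metrics endpoint on the default port 10252, bound to 127.0.0.1 by --address above:
curl -s http://127.0.0.1:10252/metrics | head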

3.10、Test the high availability of the kube-controller-manager cluster
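
* Also left empty in the original; one way to test it, assuming leader election records its holder in the kube-controller-manager Endpoints annotation: note the current holderIdentity, stop the service on that node, and confirm the identity moves to another node:
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity
ssh root@<current-leader-node> "systemctl stop kube-controller-manager"
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity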

4、Deploy the highly available kube-scheduler cluster

4.1、Create the kube-scheduler certificate and private key

cat > /usr/local/cfssl/cert/kube-scheduler-csr.json <<EOF
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "${MASTER_IPS[0]}",
      "${MASTER_IPS[1]}",
      "${MASTER_IPS[2]}"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-scheduler",
        "OU": "System"
      }
    ]
}
EOF
source /usr/local/k8s/bin/environment.sh
cd /usr/local/cfssl/cert/
cfssl gencert -ca=/usr/local/k8s/cert/ca.pem \
  -ca-key=/usr/local/k8s/cert/ca-key.pem \
  -config=/usr/local/k8s/cert/ca-config.json \
  -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

4.2、Distribute the kube-scheduler certificate files

source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /usr/local/cfssl/cert/kube-scheduler*.pem k8s@${master_ip}:/usr/local/k8s/cert/
done

4.3、Create the kube-scheduler.kubeconfig file

source /usr/local/k8s/bin/environment.sh
cd /usr/local/k8s/conf
kubectl config set-cluster kubernetes \
  --certificate-authority=/usr/local/k8s/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/usr/local/cfssl/cert/kube-scheduler.pem \
  --client-key=/usr/local/cfssl/cert/kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

4.4、Distribute the kube-scheduler.kubeconfig file

source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /usr/local/k8s/conf/kube-scheduler.kubeconfig k8s@${master_ip}:/usr/local/k8s/conf/
done

4.5、Create the kube-scheduler systemd unit file

source /usr/local/k8s/bin/environment.sh
sudo bash -c "cat > /lib/systemd/system/kube-scheduler.service" <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/k8s/bin/kube-scheduler \\
  --address=127.0.0.1 \\
  --master=${KUBE_APISERVER} \\
  --kubeconfig=/usr/local/k8s/conf/kube-scheduler.kubeconfig \\
  --leader-elect=true \\
  --v=2 \\
  --logtostderr=false \\
  --log-dir=/usr/local/k8s/logs
Restart=on-failure
RestartSec=5
User=k8s

[Install]
WantedBy=multi-user.target
EOF

4.6、Distribute the systemd unit file to all master nodes:

source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /lib/systemd/system/kube-scheduler.service root@${master_ip}:/lib/systemd/system/
done

4.7、Start the kube-scheduler service

source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler"
done

4.8、Check the service running status

source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    ssh k8s@${master_ip} "systemctl status kube-scheduler"
done

4.9、View the exposed metrics
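
* Left empty in the original; a minimal sketch, assuming kube-scheduler serves its metrics on the default insecure port 10251, bound to 127.0.0.1 by --address above:
curl -s http://127.0.0.1:10251/metrics | head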

4.10、Test the high availability of the kube-scheduler cluster
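
* Also left empty in the original; the same leader-election check as for kube-controller-manager should apply, assuming the leader is recorded in the kube-scheduler Endpoints annotation:
kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep holderIdentity
ssh root@<current-leader-node> "systemctl stop kube-scheduler"
kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep holderIdentity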

八、Deploy the worker nodes

1、Deploy the flanneld network

1.1、Download and distribute the flanneld binaries

* Download the latest release from: https://github.com/coreos/flannel/releases
sudo wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz -P /usr/local/src
sudo mkdir -p /usr/local/src/flanneld
sudo tar -zxvf /usr/local/src/flannel-v0.10.0-linux-amd64.tar.gz -C /usr/local/src/flanneld
source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    scp  /usr/local/src/flanneld/{flanneld,mk-docker-opts.sh} k8s@${worker_ip}:/usr/local/k8s/bin/
    scp /usr/local/src/kubernetes/cluster/centos/node/bin/remove-docker0.sh k8s@${worker_ip}:/usr/local/k8s/bin
    ssh k8s@${worker_ip} "chmod +x /usr/local/k8s/bin/*"
done

1.2、Create the flannel certificate and private key

cat > /usr/local/cfssl/cert/flanneld-csr.json <<EOF
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
source /usr/local/k8s/bin/environment.sh
cd /usr/local/cfssl/cert/
cfssl gencert -ca=/usr/local/k8s/cert/ca.pem \
  -ca-key=/usr/local/k8s/cert/ca-key.pem \
  -config=/usr/local/k8s/cert/ca-config.json \
  -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld

1.3、Distribute the flannel certificate files

source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    scp /usr/local/cfssl/cert/flanneld*.pem k8s@${worker_ip}:/usr/local/k8s/cert/
done

1.4、Write the cluster Pod network configuration to etcd

source /usr/local/k8s/bin/environment.sh
etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/usr/local/k8s/cert/ca.pem \
  --cert-file=/usr/local/k8s/cert/flanneld.pem \
  --key-file=/usr/local/k8s/cert/flanneld-key.pem \
  set ${FLANNEL_ETCD_PREFIX}/config '{"Network":"'${CLUSTER_CIDR}'", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}'

1.5、Create the flanneld systemd unit file

source /usr/local/k8s/bin/environment.sh
sudo bash -c "cat > /lib/systemd/system/flanneld.service" << EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStartPre=/usr/local/k8s/bin/remove-docker0.sh
ExecStart=/usr/local/k8s/bin/flanneld \\
  -etcd-cafile=/usr/local/k8s/cert/ca.pem \\
  -etcd-certfile=/usr/local/k8s/cert/flanneld.pem \\
  -etcd-keyfile=/usr/local/k8s/cert/flanneld-key.pem \\
  -etcd-endpoints=${ETCD_ENDPOINTS} \\
  -etcd-prefix=${FLANNEL_ETCD_PREFIX} \\
  -iface=${IFACE} \\
  -ip-masq
ExecStartPost=/usr/local/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF

1.6、Distribute the flanneld systemd unit file to all nodes

source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    scp /lib/systemd/system/flanneld.service root@${worker_ip}:/lib/systemd/system
done

1.7、Start the flanneld service

source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    ssh root@${worker_ip} "systemctl daemon-reload && systemctl enable flanneld && systemctl restart flanneld"
done

1.8、Check the startup result

source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    ssh k8s@${worker_ip} "systemctl status flanneld"
done

1.9、Check the Pod network configuration used by the flanneld instances

source /usr/local/k8s/bin/environment.sh
etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/usr/local/k8s/cert/ca.pem \
  --cert-file=/usr/local/k8s/cert/flanneld.pem \
  --key-file=/usr/local/k8s/cert/flanneld-key.pem \
  get ${FLANNEL_ETCD_PREFIX}/config

1.10、List the allocated Pod subnets (/24):

source /usr/local/k8s/bin/environment.sh
etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/usr/local/k8s/cert/ca.pem \
  --cert-file=/usr/local/k8s/cert/flanneld.pem \
  --key-file=/usr/local/k8s/cert/flanneld-key.pem \
  ls ${FLANNEL_ETCD_PREFIX}/subnets

1.11、View the node IP and flannel interface address corresponding to one Pod subnet:

source /usr/local/k8s/bin/environment.sh
etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/usr/local/k8s/cert/ca.pem \
  --cert-file=/usr/local/k8s/cert/flanneld.pem \
  --key-file=/usr/local/k8s/cert/flanneld-key.pem \
  get ${FLANNEL_ETCD_PREFIX}/subnets/172.30.9.0-24

1.12、Verify that the nodes can reach each other over the Pod network

source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    ssh ${worker_ip} "/usr/sbin/ip addr show flannel.1|grep -w inet"
done
source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    ssh ${worker_ip} "ping -c 1 172.30.24.0"
    ssh ${worker_ip} "ping -c 1 172.30.89.0"
    ssh ${worker_ip} "ping -c 1 172.30.9.0"
done

2、Deploy the docker component

2.1、Download and distribute the docker binaries

* Download the latest release from: https://download.docker.com/linux/static/stable/x86_64/
sudo wget https://download.docker.com/linux/static/stable/x86_64/docker-18.03.1-ce.tgz -P /usr/local/src
sudo tar -xvf /usr/local/src/docker-18.03.1-ce.tgz -C /usr/local/src
source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    scp /usr/local/src/docker/docker* k8s@${worker_ip}:/usr/local/k8s/bin/
    ssh k8s@${worker_ip} "chmod +x /usr/local/k8s/bin/*"
done

2.2、Create the docker systemd unit file

sudo bash -c "cat > /lib/systemd/system/docker.service" <<"EOF"
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io

[Service]
Type=notify
Environment="PATH=/usr/local/k8s/bin:/bin:/sbin:/usr/bin:/usr/sbin"
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/local/k8s/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --registry-mirror=https://ovq2rvvh.mirror.aliyuncs.com --max-concurrent-downloads=20 --log-level=error $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

2.3、Distribute the systemd unit file to all worker nodes

source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    scp /lib/systemd/system/docker.service root@${worker_ip}:/lib/systemd/system/
done

2.4、Start the docker service

source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    ssh root@${worker_ip} "systemctl stop firewalld && systemctl disable firewalld"
    ssh root@${worker_ip} "iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat"
    ssh root@${worker_ip} "iptables -P FORWARD ACCEPT"
    ssh root@${worker_ip} "systemctl daemon-reload && systemctl enable docker && systemctl restart docker"
done

2.5、Check the service running status

source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    ssh k8s@${worker_ip} "systemctl status docker"
done

2.6、Check the docker0 bridge

source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    ssh k8s@${worker_ip} "/usr/sbin/ip addr show"
done

3、Deploy the kubelet component

3.1、Download and distribute the kubelet binaries (already downloaded earlier)

sudo wget https://dl.k8s.io/v1.10.4/kubernetes-server-linux-amd64.tar.gz -P /usr/local/src
sudo tar -xzvf /usr/local/src/kubernetes-server-linux-amd64.tar.gz -C /usr/local/src
sudo tar -xzvf /usr/local/src/kubernetes/kubernetes-src.tar.gz -C /usr/local/src/kubernetes

Copy the binaries to all worker nodes:

source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    scp /usr/local/src/kubernetes/server/bin/{kubelet,kube-proxy} k8s@${worker_ip}:/usr/local/k8s/bin/
    ssh k8s@${worker_ip} "chmod +x /usr/local/k8s/bin/*"
done
source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    ssh root@${worker_ip} "yum install -y conntrack jq"
done

3.2、Create the kubelet-bootstrap.kubeconfig files

source /usr/local/k8s/bin/environment.sh
cd /tmp/
for worker_name in ${WORKER_NAMES[@]};do
    echo ">>> ${worker_name}"
    # create a bootstrap token
    export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:${worker_name} \
      --kubeconfig ~/.kube/config)

    # set cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=/usr/local/k8s/cert/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kubelet-bootstrap-${worker_name}.kubeconfig

    # set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=kubelet-bootstrap-${worker_name}.kubeconfig

    # set context parameters
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=kubelet-bootstrap-${worker_name}.kubeconfig

    # set the default context
    kubectl config use-context default --kubeconfig=kubelet-bootstrap-${worker_name}.kubeconfig
done

3.3、Distribute the kubelet-bootstrap.kubeconfig files to all worker nodes

source /usr/local/k8s/bin/environment.sh
for worker_name in ${WORKER_NAMES[@]};do
    echo ">>> ${worker_name}"
    scp /tmp/kubelet-bootstrap-${worker_name}.kubeconfig k8s@${worker_name}:/usr/local/k8s/conf/kubelet-bootstrap.kubeconfig
done

3.4、Configure the CNI plugins to support flannel

* Download the CNI plugins
sudo wget https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz -P /usr/local/src
mkdir -p /usr/local/k8s/bin/cni
tar zxf /usr/local/src/cni-plugins-amd64-v0.7.1.tgz -C /usr/local/k8s/bin/cni
chmod +x /usr/local/k8s/bin/cni/*
sudo mkdir -p /etc/cni/net.d
sudo bash -c "cat > /etc/cni/net.d/10-default.conf" <<EOF
{
    "name": "flannel",
    "type": "flannel",
    "delegate": {
        "bridge": "docker0",
        "isDefaultGateway": true,
        "mtu": 1400
    }
}
EOF
source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    ssh k8s@${worker_ip} "mkdir -p /usr/local/k8s/bin/cni"
    scp /usr/local/k8s/bin/cni/* k8s@${worker_ip}:/usr/local/k8s/bin/cni
    ssh root@${worker_ip} "mkdir -p /etc/cni/net.d"
    scp /etc/cni/net.d/* root@${worker_ip}:/etc/cni/net.d
done

3.5、Create and distribute the kubelet parameter config file

source /usr/local/k8s/bin/environment.sh
cat > /usr/local/k8s/conf/kubelet.config.json <<EOF
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/usr/local/k8s/cert/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "${WORKER_IPS[0]}",
  "port": 10250,
  "readOnlyPort": 0,
  "cgroupDriver": "cgroupfs",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientcertificate": true,
    "RotateKubeletServercertificate": true
  },
  "clusterDomain": "${CLUSTER_DNS_DOMAIN}",
  "clusterDNS": ["${CLUSTER_DNS_SVC_IP}"]
}
EOF
source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    scp /usr/local/k8s/conf/kubelet.config.json root@${worker_ip}:/usr/local/k8s/conf/kubelet.config.json
    ssh root@${worker_ip} "sed -i 's/${WORKER_IPS[0]}/${worker_ip}/g' /usr/local/k8s/conf/kubelet.config.json"
done

3.6、Create the kubelet systemd unit file

sudo bash -c "cat > /lib/systemd/system/kubelet.service" <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/k8s/bin/kubelet \\
  --hostname-override=${WORKER_NAMES[0]} \\
  --bootstrap-kubeconfig=/usr/local/k8s/conf/kubelet-bootstrap.kubeconfig \\
  --config=/usr/local/k8s/conf/kubelet.config.json \\
  --cert-dir=/usr/local/k8s/cert \\
  --network-plugin=cni \\
  --cni-conf-dir=/etc/cni/net.d \\
  --cni-bin-dir=/usr/local/k8s/bin/cni \\
  --fail-swap-on=false \\
  --kubeconfig=/usr/local/k8s/conf/kubelet.kubeconfig \\
  --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.0 \\
  --v=2 \\
  --logtostderr=false \\
  --log-dir=/usr/local/k8s/logs
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

3.7、Distribute the kubelet systemd unit file to all worker nodes

source /usr/local/k8s/bin/environment.sh
for worker_name in ${WORKER_NAMES[@]};do
    echo ">>> ${worker_name}"
    scp /lib/systemd/system/kubelet.service root@${worker_name}:/lib/systemd/system
    ssh root@${worker_name} "sed -i 's/${WORKER_NAMES[0]}/${worker_name}/g' /lib/systemd/system/kubelet.service"
done

3.8、Bootstrap token auth and granting permissions

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
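
* Without this binding, kubelets using the bootstrap token are not allowed to create certificate signing requests and fail to register; the binding can be verified with:
kubectl get clusterrolebinding kubelet-bootstrap -o yaml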

3.9、Start the kubelet service

source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    ssh root@${worker_ip} "mkdir -p /var/lib/kubelet"
    ssh root@${worker_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
done

3.10、Check the startup result

source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    ssh k8s@${worker_ip} "systemctl status kubelet"
done

3.11、Approve the kubelet CSR requests

* List the CSRs:
kubectl get csr
kubectl get csr|grep 'Pending' | awk 'NR>0{print $1}'| xargs kubectl certificate approve
kubectl get nodes

* Check the approve result:
kubectl get csr|awk 'NR==3{print $1}'| xargs kubectl describe csr

* Requesting User: the user that submitted the CSR; kube-apiserver authenticates and authorizes it;
* Subject: the certificate information being requested;
* The certificate CN is system:node:k8s-02m and the Organization is system:nodes; kube-apiserver's Node authorization mode grants the relevant permissions to this certificate;
* Create three ClusterRoleBindings used to automatically approve client CSRs and renew client and server certificates:
cat > /usr/local/k8s/yaml/csr-crb.yaml <<EOF
 # Approve all CSRs for the group "system:bootstrappers"
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: auto-approve-csrs-for-group
 subjects:
 - kind: Group
   name: system:bootstrappers
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
   apiGroup: rbac.authorization.k8s.io
---
 # To let a node of the group "system:bootstrappers" renew its own credentials
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: node-client-cert-renewal
 subjects:
 - kind: Group
   name: system:bootstrappers
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
   apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
 # To let a node of the group "system:nodes" renew its own server credentials
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: node-server-cert-renewal
 subjects:
 - kind: Group
   name: system:nodes
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: approve-node-server-renewal-csr
   apiGroup: rbac.authorization.k8s.io
EOF

* Apply the configuration:
kubectl delete -f /usr/local/k8s/yaml/csr-crb.yaml
kubectl apply -f /usr/local/k8s/yaml/csr-crb.yaml

* Check the kubelet status
* After a while (1-10 minutes), the CSRs of all three nodes are automatically approved:
kubectl get csr

* All nodes are Ready:
kubectl get nodes -o wide

* kube-controller-manager generated a kubeconfig file and a key pair for each node:
cat /usr/local/k8s/conf/kubelet.kubeconfig
ls -l /usr/local/k8s/cert/|grep kubelet

* The kubelet server certificate is rotated periodically;

3.12、API endpoints exposed by the kubelet
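
* Left empty in the original; as a rough sketch of what to expect, the ports the kubelet listens on can be listed on any worker node (10250 is the secure API port discussed below, 10248 the healthz port, 4194 the cAdvisor web port):
sudo netstat -lnpt | grep kubelet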

3.13、kubelet API authentication and authorization

* The kubelet is configured with the following authentication parameters:
* authentication.anonymous.enabled: set to false; anonymous access to port 10250 is not allowed;
* authentication.x509.clientCAFile: the CA certificate that signs client certificates, enabling HTTPS certificate authentication;
* authentication.webhook.enabled=true: enables HTTPS bearer token authentication;

* And with the following authorization parameters:
* authorization.mode=Webhook: enables RBAC authorization;
* After receiving a request, the kubelet authenticates the certificate signature against clientCAFile, or checks whether the bearer token is valid. If neither succeeds, the request is rejected with Unauthorized:
curl -s --cacert /usr/local/k8s/cert/ca.pem https://127.0.0.1:10250/metrics

curl -s --cacert /usr/local/k8s/cert/ca.pem -H "Authorization: Bearer 123456" https://192.168.10.53:10250/metrics
* After authentication, the kubelet sends a SubjectAccessReview request to kube-apiserver to check whether the user/group behind the certificate or token has permission to operate on the resource (RBAC);

* Certificate authentication and authorization:
* A certificate with insufficient permissions;
curl -s --cacert /usr/local/k8s/cert/ca.pem --cert /usr/local/k8s/cert/kube-controller-manager.pem --key /usr/local/k8s/cert/kube-controller-manager-key.pem https://192.168.10.53:10250/metrics

* Using the admin certificate with the highest permissions, created when deploying the kubectl command-line tool;
curl -s --cacert /usr/local/k8s/cert/ca.pem --cert admin.pem --key admin-key.pem https://192.168.10.53:10250/metrics|head

* Bearer token authentication and authorization:
* Create a ServiceAccount and bind it to the ClusterRole system:kubelet-api-admin so that it has permission to call the kubelet API:
kubectl create sa kubelet-api-test
kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')
echo ${TOKEN}

curl -s --cacert /usr/local/k8s/cert/ca.pem -H "Authorization: Bearer ${TOKEN}" https://192.168.10.53:10250/metrics|head

* cadvisor and metrics
* cadvisor collects resource usage (CPU, memory, disk, network) of the containers on its node and exposes it both on its own http web page (port 4194) and on port 10250 in prometheus metrics form.

* Visiting http://192.168.10.51:4194/containers/ in a browser shows the cadvisor monitoring page;
* Visiting https://192.168.10.51:10250/metrics and https://192.168.10.51:10250/metrics/cadvisor returns the kubelet and cadvisor metrics respectively.

* Note:
* kubelet.config.json sets authentication.anonymous.enabled to false, so anonymous certificate access to the https service on 10250 is not allowed;
* See "A. Accessing the kube-apiserver secure port from a browser.md" to create and import the relevant certificates, then access port 10250 as above;

3.14、Retrieve the kubelet configuration

source /usr/local/k8s/bin/environment.sh
* Using the admin certificate with the highest permissions, created when deploying the kubectl command-line tool;
curl -sSL --cacert /usr/local/k8s/cert/ca.pem --cert /usr/local/k8s/cert/admin.pem --key /usr/local/k8s/cert/admin-key.pem https://${MASTER_VIP}:6443/api/v1/nodes/k8s-01m/proxy/configz | jq \
'.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
* Or see the comments in the source code: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/kubeletconfig/v1beta1/types.go
* References
* kubelet authentication and authorization: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/

4、Deploy the kube-proxy component

4.1、Download and distribute the kube-proxy binary
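
* The kube-proxy binary was already copied to every worker node together with kubelet in step 3.1; if it is missing on a node, it can be re-distributed with the same loop (a sketch reusing the paths from that step):
source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    scp /usr/local/src/kubernetes/server/bin/kube-proxy k8s@${worker_ip}:/usr/local/k8s/bin/
    ssh k8s@${worker_ip} "chmod +x /usr/local/k8s/bin/kube-proxy"
done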

4.2、Install the kube-proxy dependencies

source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    ssh root@${worker_ip} "yum install -y ipvsadm ipset iptables && /usr/sbin/modprobe ip_vs "
done
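
* Optionally confirm that the ipvs kernel modules are loaded on each node (a quick sketch; module names can vary slightly with the kernel version):
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    ssh root@${worker_ip} "lsmod | grep -e ip_vs -e nf_conntrack"
done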

4.3、Create the kube-proxy certificate and private key

cat > /usr/local/cfssl/cert/kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
source /usr/local/k8s/bin/environment.sh
cd /usr/local/cfssl/cert/
cfssl gencert -ca=/usr/local/k8s/cert/ca.pem \
  -ca-key=/usr/local/k8s/cert/ca-key.pem \
  -config=/usr/local/k8s/cert/ca-config.json \
  -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy

4.4、Distribute the kube-proxy certificate files

source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    scp /usr/local/cfssl/cert/kube-proxy*.pem k8s@${worker_ip}:/usr/local/k8s/cert/
done

4.5、Create the kube-proxy.kubeconfig file

source /usr/local/k8s/bin/environment.sh
cd /usr/local/k8s/conf
* Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/usr/local/k8s/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

* Set client authentication parameters
kubectl config set-credentials kube-proxy \
  --client-certificate=/usr/local/k8s/cert/kube-proxy.pem \
  --client-key=/usr/local/k8s/cert/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
* --embed-certs=true embeds the contents of ca.pem and kube-proxy.pem into the generated kube-proxy.kubeconfig file (without it, only the file paths are written);

* Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

* Set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

4.6、Distribute the kube-proxy.kubeconfig file:

source /usr/local/k8s/bin/environment.sh
for worker_name in ${WORKER_NAMES[@]};do
    echo ">>> ${worker_name}"
    scp /usr/local/k8s/conf/kube-proxy.kubeconfig k8s@${worker_name}:/usr/local/k8s/conf/
done

4.7、Create the kube-proxy config file

cat > /usr/local/k8s/yaml/kube-proxy.config.yaml <<EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: ${WORKER_IPS[0]}
clientConnection:
  kubeconfig: /usr/local/k8s/conf/kube-proxy.kubeconfig
clusterCIDR: ${CLUSTER_CIDR}
healthzBindAddress: ${WORKER_IPS[0]}:10256
hostnameOverride: ${WORKER_IPS[0]}
kind: KubeProxyConfiguration
metricsBindAddress: ${WORKER_IPS[0]}:10249
mode: "ipvs"
EOF
source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /usr/local/k8s/yaml/kube-proxy.config.yaml k8s@${worker_ip}:/usr/local/k8s/yaml
    ssh k8s@${worker_ip} "sed -i 's/${WORKER_IPS[0]}/${worker_ip}/' /usr/local/k8s/yaml/kube-proxy.config.yaml"
done

4.8、Create the kube-proxy systemd unit file

source /usr/local/k8s/bin/environment.sh
sudo bash -c "cat > /lib/systemd/system/kube-proxy.service" <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/k8s/bin/kube-proxy \\
  --config=/usr/local/k8s/yaml/kube-proxy.config.yaml \\
  --v=2 \\
  --logtostderr=false \\
  --log-dir=/usr/local/k8s/logs
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

4.9、Distribute the kube-proxy systemd unit file:

source /usr/local/k8s/bin/environment.sh
for worker_name in ${WORKER_NAMES[@]};do
    echo ">>> ${worker_name}"
    scp /lib/systemd/system/kube-proxy.service root@${worker_name}:/lib/systemd/system
done

4.10、Start the kube-proxy service

source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    ssh root@${worker_ip} "mkdir -p /var/lib/kube-proxy"
    ssh root@${worker_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
done

4.11、Check the startup result

source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    ssh k8s@${worker_ip} "systemctl status kube-proxy"
done

4.12、View the ipvs routing rules

source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    ssh root@${worker_ip} "/usr/sbin/ipvsadm -ln"
done

九、Deploy the cluster add-ons

* Add-ons are supplementary components that enrich and complete the cluster's functionality.
## 1、Deploy the coredns add-on
### 1.1、Modify the config file
* Unpack the downloaded kubernetes-server-linux-amd64.tar.gz, then unpack the kubernetes-src.tar.gz inside it.
* The coredns files are under: cluster/addons/dns.
\cp /usr/local/src/kubernetes/cluster/addons/dns/coredns.yaml.base /usr/local/k8s/yaml/coredns.yaml
source /usr/local/k8s/bin/environment.sh
sed -i "s/__PILLAR__DNS__DOMAIN__/${CLUSTER_DNS_DOMAIN}/g" /usr/local/k8s/yaml/coredns.yaml
sed -i "s/__PILLAR__DNS__SERVER__/${CLUSTER_DNS_SVC_IP}/g" /usr/local/k8s/yaml/coredns.yaml

### 1.2、Create coredns
kubectl delete -f /usr/local/k8s/yaml/coredns.yaml
kubectl create -f /usr/local/k8s/yaml/coredns.yaml

### 1.3、Check that coredns works
kubectl -n kube-system get all -o wide
kubectl -n kube-system describe pod coredns
kubectl -n kube-system logs coredns-77c989547b-c9gtx

# use a container to test whether external names such as baidu.com can be resolved
kubectl run dns-test --rm -it --image=alpine /bin/sh
kubectl get --all-namespaces pods
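
* Inside the dns-test container, a couple of quick lookups (a sketch; the kube-dns service name comes from the coredns addon manifest):
nslookup kubernetes.default.svc.cluster.local
nslookup kube-dns.kube-system.svc.cluster.local
ping -c 1 baidu.com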

* References
https://community.infoblox.com/t5/Community-Blog/CoreDNS-for-Kubernetes-Service-Discovery/ba-p/8187 https://coredns.io/2017/03/01/coredns-for-kubernetes-service-discovery-take-2/ https://www.cnblogs.com/boshen-hzb/p/7511432.html https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns

## 2、Deploy the dashboard add-on

### 2.1、Modify the config files
* Unpack the downloaded kubernetes-server-linux-amd64.tar.gz, then unpack the kubernetes-src.tar.gz inside it.
* The dashboard files are under: cluster/addons/dashboard.
rm -rf /usr/local/k8s/yaml/dashboard
mkdir -p /usr/local/k8s/yaml/dashboard
\cp -a /usr/local/src/kubernetes/cluster/addons/dashboard/{dashboard-configmap.yaml,dashboard-controller.yaml,dashboard-rbac.yaml,dashboard-secret.yaml,dashboard-service.yaml} /usr/local/k8s/yaml/dashboard
source /usr/local/k8s/bin/environment.sh
sed -i "s@image:.*@image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.8.3@g" /usr/local/k8s/yaml/dashboard/dashboard-controller.yaml
sed -i "/spec/a\  type: NodePort" /usr/local/k8s/yaml/dashboard/dashboard-service.yaml
sed -i "/targetPort/a\    nodePort: 38443" /usr/local/k8s/yaml/dashboard/dashboard-service.yaml

### 2.2、Apply all definition files
kubectl delete -f /usr/local/k8s/yaml/dashboard
kubectl create -f /usr/local/k8s/yaml/dashboard

### 2.3、Check the allocated NodePort
kubectl -n kube-system get all -o wide
kubectl -n kube-system describe pod kubernetes-dashboard

* NodePort 38443 maps to port 443 of the dashboard pod;
* The dashboard --authentication-mode supports token and basic, defaulting to token. To use basic, kube-apiserver must be configured with '--authorization-mode=ABAC' and '--basic-auth-file'.

### 2.4、View the command-line flags supported by the dashboard
kubectl exec --namespace kube-system -it kubernetes-dashboard-65f7b4f486-wgc6j  -- /dashboard --help

### 2.5、Access the dashboard
* For cluster security, since 1.7 the dashboard only allows access over https. When using kubectl proxy it must listen on localhost or 127.0.0.1; NodePort has no such restriction, but is only recommended in development environments.
* For logins that do not satisfy these conditions, the browser does not redirect after a successful login and stays on the login page.
* See: https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above https://github.com/kubernetes/dashboard/issues/2540
* Three ways to access the dashboard:
* via NodePort;
* via kubectl proxy;
* via kube-apiserver.

### 2.6、Access dashboard via NodePort
* The kubernetes-dashboard service exposes a NodePort, so the dashboard can be reached at https://NodeIP:NodePort;
* Access via a browser (e.g. Firefox): https://192.168.10.52:38443
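
* To confirm the NodePort actually assigned to the service (38443 if the sed edit above was applied):
kubectl -n kube-system get svc kubernetes-dashboard
#The PORT(S) column should show 443:38443/TCP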

### 2.7、Access dashboard via kubectl proxy
* Start the proxy:
kubectl proxy --address='localhost' --port=8086 --accept-hosts='^*$'
* --address must be localhost or 127.0.0.1;
* The --accept-hosts option must be specified, otherwise the browser gets "Unauthorized" when accessing the dashboard page;
* Browser URL: http://127.0.0.1:8086/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

### 2.8、Access dashboard via kube-apiserver
* Get the list of cluster service URLs:
kubectl cluster-info
* The dashboard must be accessed through the secure (https) port of kube-apiserver; the browser needs a custom client certificate, otherwise kube-apiserver rejects the request.
* For the steps to create and import a custom certificate, see: A. Accessing the kube-apiserver secure port from a browser
* Browser URL: https://192.168.10.50:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
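
* A command-line equivalent of the browser access above, as a sketch only — it assumes the admin client certificate and key created earlier sit at /usr/local/k8s/cert/admin.pem and admin-key.pem (adjust the paths to wherever you generated them):
curl -s --cacert /usr/local/k8s/cert/ca.pem \
  --cert /usr/local/k8s/cert/admin.pem \
  --key /usr/local/k8s/cert/admin-key.pem \
  https://192.168.10.50:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
#Getting the dashboard HTML back confirms the apiserver proxy path works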


### 2.9、Create a token and kubeconfig file for logging in to the Dashboard
* As mentioned above, the Dashboard only supports token authentication by default, so when using a KubeConfig file the token must be specified in that file; client certificate authentication is not supported.

* Create a login token
kubectl create sa dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
ADMIN_SECRET=$(kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}')
DASHBOARD_LOGIN_TOKEN=$(kubectl describe secret -n kube-system ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}')
echo ${DASHBOARD_LOGIN_TOKEN}

* Use the printed token to log in to the Dashboard.

* Create a KubeConfig file that uses the token
source /usr/local/k8s/bin/environment.sh
cd /usr/local/k8s/conf
* Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/usr/local/k8s/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=dashboard.kubeconfig

* Set client authentication parameters, using the token created above
kubectl config set-credentials dashboard_user \
  --token=${DASHBOARD_LOGIN_TOKEN} \
  --kubeconfig=dashboard.kubeconfig

* Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=dashboard_user \
  --kubeconfig=dashboard.kubeconfig

* Set the default context
kubectl config use-context default --kubeconfig=dashboard.kubeconfig
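* Optional sanity check of the generated file — view it unredacted and confirm the token is embedded for dashboard_user and the server points at ${KUBE_APISERVER}:
kubectl config view --kubeconfig=dashboard.kubeconfig --raw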
* Log in to the Dashboard with the generated dashboard.kubeconfig.
* Without the Heapster add-on, the dashboard currently cannot display CPU, memory and other statistics or charts for Pods and Nodes;
* References
https://github.com/kubernetes/dashboard/wiki/Access-control
https://github.com/kubernetes/dashboard/issues/2558
https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/

## 3、Deploy the heapster add-on

* Heapster is a collector: it aggregates the cAdvisor data from every Node and exports it to third-party backends (such as InfluxDB).
* Heapster obtains cAdvisor metrics by calling the kubelet HTTP API.
* Since the kubelet only accepts https requests on port 10250, the heapster deployment configuration must be modified, and the kube-system:heapster ServiceAccount must be granted permission to call the kubelet API.

### 3.1、Download the heapster files
* Download the latest heapster release from the heapster releases page
sudo wget https://github.com/kubernetes/heapster/archive/v1.5.3.tar.gz -O /usr/local/src/heapster-1.5.3.tar.gz
sudo tar -xzvf /usr/local/src/heapster-1.5.3.tar.gz -C /usr/local/src

### 3.2、Modify the configuration
* Official files directory: heapster-1.5.3/deploy/kube-config/influxdb
rm -rf /usr/local/k8s/yaml/heapster
mkdir -p /usr/local/k8s/yaml/heapster
\cp -a /usr/local/src/heapster-1.5.3/deploy/kube-config/influxdb/{grafana.yaml,heapster.yaml,influxdb.yaml} /usr/local/k8s/yaml/heapster
sed -i "s@image:.*@image: registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-grafana-amd64:v4.4.3@g" /usr/local/k8s/yaml/heapster/grafana.yaml
sed -i "67a\  type: NodePort" /usr/local/k8s/yaml/heapster/grafana.yaml
sed -i "/targetPort/a\    nodePort: 33000" /usr/local/k8s/yaml/heapster/grafana.yaml

sed -i "s@image:.*@image: registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-amd64:v1.5.3@g" /usr/local/k8s/yaml/heapster/heapster.yaml
* Since the kubelet only listens for https requests on port 10250, add the relevant parameters;
sed -i "s@source=.*@source=kubernetes:https://kubernetes.default?kubeletHttps=true\&kubeletPort=10250@g" /usr/local/k8s/yaml/heapster/heapster.yaml

sed -i "s@image:.*@image: registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-influxdb-amd64:v1.3.3@g" /usr/local/k8s/yaml/heapster/influxdb.yaml

* Bind the ServiceAccount kube-system:heapster to the ClusterRole system:kubelet-api-admin to grant it permission to call the kubelet API;
\cp -a /usr/local/src/heapster-1.5.3/deploy/kube-config/rbac/heapster-rbac.yaml /usr/local/k8s/yaml/heapster
cat > /usr/local/k8s/yaml/heapster/heapster-rbac.yaml <<EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster-kubelet-api
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kubelet-api-admin
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
EOF

### 3.3、Apply all definition files
kubectl delete -f  /usr/local/k8s/yaml/heapster
kubectl create -f  /usr/local/k8s/yaml/heapster
kubectl apply -f  /usr/local/k8s/yaml/heapster/heapster-rbac.yaml

### 3.4、Check the results
kubectl -n kube-system get all -o wide | grep -E 'heapster|monitoring'
kubectl -n kube-system describe pod heapster
kubectl -n kube-system describe pod monitoring
* Check the kubernetes dashboard UI; it should now correctly display CPU, memory, load and other statistics and charts for Nodes and Pods:
kubectl -n kube-system get all -o wide
kubectl -n kube-system logs heapster-648964fbdd-2spsz


### 3.5、Access grafana
### 3.5.1、Access via NodePort:
kubectl get svc -n kube-system|grep -E 'monitoring|heapster'
* grafana listens on NodePort 33000 (pinned by the sed edit above);
* Browser URL: http://192.168.10.52:33000/?orgId=1

### 3.5.2、Access via kube-apiserver:
* Get the monitoring-grafana service URL:
kubectl cluster-info
* Browser URL: https://192.168.10.50:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy

### 3.5.3、Access via kubectl proxy:
* Start the proxy
kubectl proxy --address='192.168.10.50' --port=8086 --accept-hosts='^*$'
* Browser URL: http://192.168.10.50:8086/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy/?orgId=1

* References:
* Configuring heapster: https://github.com/kubernetes/heapster/blob/master/docs/source-configuration.md

## 4、Deploy the EFK add-on

### 4.1、Modify the definition files
* The directory for EFK is kubernetes/cluster/addons/fluentd-elasticsearch
rm -rf /usr/local/k8s/yaml/efk
mkdir -p /usr/local/k8s/yaml/efk
cp -a /usr/local/src/kubernetes/cluster/addons/fluentd-elasticsearch/*.yaml /usr/local/k8s/yaml/efk

* Replace the default gcr.io images with reachable mirrors:
#In /usr/local/k8s/yaml/efk/es-statefulset.yaml, change the elasticsearch image to:
- image: longtds/elasticsearch:v5.6.4
#In /usr/local/k8s/yaml/efk/fluentd-es-ds.yaml, change the fluentd image to:
image: netonline/fluentd-elasticsearch:v2.0.4
#Alternative images: acs-sample/elasticsearch:v6.1.3, acs-sample/fluentd-elasticsearch

### 4.2、Label the Nodes
#The DaemonSet fluentd-es is only scheduled onto Nodes labeled beta.kubernetes.io/fluentd-ds-ready=true, so set this label on every Node that should run fluentd;
kubectl get nodes
kubectl label nodes k8s-03m beta.kubernetes.io/fluentd-ds-ready=true
kubectl describe nodes k8s-03m
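
* To list only the Nodes carrying the label (these are the ones the fluentd-es DaemonSet will schedule onto):
kubectl get nodes -l beta.kubernetes.io/fluentd-ds-ready=true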

### 4.3、Apply the definition files
kubectl delete -f /usr/local/k8s/yaml/efk
kubectl create -f /usr/local/k8s/yaml/efk

### 4.4、Check the results
kubectl -n kube-system get all  -o wide|grep -E 'elasticsearch|fluentd|kibana'
kubectl -n kube-system  get service |grep -E 'elasticsearch|kibana'

kubectl -n kube-system describe pods elasticsearch
kubectl -n kube-system describe pods fluentd
kubectl -n kube-system describe pods kibana
* On its first start the kibana Pod takes **quite a long time (0-20 minutes)** to optimize and cache the status pages; tail the Pod's log to watch the progress:
kubectl -n kube-system logs -f kibana-logging-7445dc9757-pvpcv
* Note: the kibana dashboard can only be viewed after the Kibana pod has finished starting; before that, connections are refused.

### 4.5、Access kibana
* Access via kube-apiserver:
kubectl cluster-info|grep -E 'Elasticsearch|Kibana'
* Browser URL: https://192.168.10.50:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy

* Access via kubectl proxy:
* Start the proxy
kubectl proxy --address='192.168.10.50' --port=8086 --accept-hosts='^*$'
* Browser URL: http://192.168.10.50:8086/api/v1/namespaces/kube-system/services/kibana-logging/proxy

* On the Settings -> Indices page create an index (the equivalent of a database in mysql): select "Index contains time-based events", keep the default logstash-* pattern and click Create;
* A few minutes after the index is created, the logs aggregated in ElasticSearch logging appear under the Discover menu;

## 5、Deploy the metrics-server add-on

### 5.1、Create the metrics-server certificate
* Create the metrics-server certificate signing request:
cat > /usr/local/cfssl/cert/metrics-server-csr.json <<EOF
{
  "CN": "aggregator",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
* Note: the CN is aggregator and must match the kube-apiserver --requestheader-allowed-names parameter;

* Generate the metrics-server certificate and private key:
cd /usr/local/cfssl/cert/
cfssl gencert -ca=/usr/local/k8s/cert/ca.pem \
  -ca-key=/usr/local/k8s/cert/ca-key.pem  \
  -config=/usr/local/k8s/cert/ca-config.json  \
  -profile=kubernetes metrics-server-csr.json | cfssljson -bare metrics-server
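
* An optional check that the generated certificate's CN matches what kube-apiserver expects in --requestheader-allowed-names (assumes openssl is installed):
openssl x509 -noout -subject -in /usr/local/cfssl/cert/metrics-server.pem
#The subject should contain CN=aggregator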

### 5.2、Distribute the metrics-server certificate files to the master nodes:
source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /usr/local/k8s/cert/metrics-server*.pem k8s@${master_ip}:/usr/local/k8s/cert/
done

### 5.3、Modify the kubernetes control-plane configuration to support metrics-server
* Add the following parameters to kube-apiserver:
--requestheader-client-ca-file=/usr/local/k8s/cert/ca.pem
--requestheader-allowed-names=""
--requestheader-extra-headers-prefix="X-Remote-Extra-"
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--proxy-client-cert-file=/usr/local/k8s/cert/metrics-server.pem
--proxy-client-key-file=/usr/local/k8s/cert/metrics-server-key.pem
--runtime-config=api/all=true
* The --requestheader-XXX and --proxy-client-XXX parameters configure the kube-apiserver aggregation layer and are required by metrics-server & HPA;
* --requestheader-client-ca-file: the CA used to sign the certificates specified by --proxy-client-cert-file and --proxy-client-key-file; used when the metrics aggregator is enabled;
* If --requestheader-allowed-names is not empty, the CN of the --proxy-client-cert-file certificate must be in allowed-names; here it is aggregator;
* If kube-proxy is not running on the kube-apiserver machines, the --enable-aggregator-routing=true parameter must also be added;

* For the --requestheader-XXX parameters, see:
https://github.com/kubernetes-incubator/apiserver-builder/blob/master/docs/concepts/auth.md
https://docs.bitnami.com/kubernetes/how-to/configure-autoscaling-custom-metrics/
* Note: the CA certificate specified by requestheader-client-ca-file must support both client auth and server auth;

* Add the following parameter to kube-controller-manager:
--horizontal-pod-autoscaler-use-rest-clients=true
* This configures the HPA controller to fetch metrics data with a REST client.
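
* A sketch of rolling the new parameters out, assuming the flags live in the kube-apiserver and kube-controller-manager systemd unit files created in earlier sections (adjust if your units or config files live elsewhere). After editing the units on every master:
source /usr/local/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "systemctl daemon-reload && systemctl restart kube-apiserver kube-controller-manager"
done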


### 5.4、Modify the add-on configuration files
* The metrics-server add-on lives in the kubernetes cluster/addons/metrics-server/ directory.
* Modify the metrics-server-deployment file:
mkdir -p /usr/local/k8s/yaml/metrics-server
cp /usr/local/src/kubernetes/cluster/addons/metrics-server/metrics-server-deployment.yaml /usr/local/k8s/yaml/metrics-server
#In /usr/local/k8s/yaml/metrics-server/metrics-server-deployment.yaml, change the images and the source parameter to:
image: mirrorgooglecontainers/metrics-server-amd64:v0.2.1
- --source=kubernetes.summary_api:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250
image: siriuszg/addon-resizer:1.8.1
* The metrics-server parameters follow a similar format to heapster's. Since the kubelet only listens for https requests on port 10250, add the relevant parameters;
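
* An optional grep to confirm the deployment file now points at the mirror images and the https summary_api source:
grep -E 'image:|--source=' /usr/local/k8s/yaml/metrics-server/metrics-server-deployment.yaml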

* Grant the kube-system:metrics-server ServiceAccount access to the kubelet API:
cat > /usr/local/k8s/yaml/metrics-server/auth-kubelet.yaml <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:kubelet-api-admin
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kubelet-api-admin
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
EOF
* This creates a new ClusterRoleBinding definition file that grants the required permissions;

### 5.5、Create metrics-server
kubectl create -f /usr/local/k8s/yaml/metrics-server/

### 5.6、Check the running state
kubectl get pods -n kube-system |grep metrics-server
kubectl get svc -n kube-system|grep metrics-server
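
* Once metrics are being collected (this can take a minute or two), kubectl top should also return data; depending on the kubectl version it reads from heapster or from the metrics API, both of which are deployed here:
kubectl top nodes
kubectl top pods -n kube-system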

### 5.7、View the metrics exposed by metrics-server
* APIs exposed by metrics-server: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/resource-metrics-api.md

* Access via kube-apiserver or kubectl proxy:
https://192.168.10.50:6443/apis/metrics.k8s.io/v1beta1/nodes
https://192.168.10.50:6443/apis/metrics.k8s.io/v1beta1/pods
https://192.168.10.50:6443/apis/metrics.k8s.io/v1beta1/namespaces//pods/

* Access directly with kubectl:
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes/
kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces//pods/

kubectl get --raw "/apis/metrics.k8s.io/v1beta1" | jq .
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "nodes",
      "singularName": "",
      "namespaced": false,
      "kind": "NodeMetrics",
      "verbs": [
        "get",
        "list"
      ]
    },
    {
      "name": "pods",
      "singularName": "",
      "namespaced": true,
      "kind": "PodMetrics",
      "verbs": [
        "get",
        "list"
      ]
    }
  ]
}

kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | jq .
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
  },
  "items": [
    {
      "metadata": {
        "name": "k8s-03m",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/k8s-03m",
        "creationTimestamp": "2018-06-16T10:24:03Z"
      },
      "timestamp": "2018-06-16T10:23:00Z",
      "window": "1m0s",
      "usage": {
        "cpu": "133m",
        "memory": "1115728Ki"
      }
    },
    {
      "metadata": {
        "name": "k8s-01m",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/k8s-01m",
        "creationTimestamp": "2018-06-16T10:24:03Z"
      },
      "timestamp": "2018-06-16T10:23:00Z",
      "window": "1m0s",
      "usage": {
        "cpu": "221m",
        "memory": "6799908Ki"
      }
    },
    {
      "metadata": {
        "name": "k8s-02m",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/k8s-02m",
        "creationTimestamp": "2018-06-16T10:24:03Z"
      },
      "timestamp": "2018-06-16T10:23:00Z",
      "window": "1m0s",
      "usage": {
        "cpu": "76m",
        "memory": "1130180Ki"
      }
    }
  ]
}
* The usage returned by /apis/metrics.k8s.io/v1beta1/nodes and /apis/metrics.k8s.io/v1beta1/pods includes both CPU and Memory;

* References:
https://kubernetes.feisky.xyz/zh/addons/metrics.html
metrics-server RBAC: https://github.com/kubernetes-incubator/metrics-server/issues/40
metrics-server parameters: https://github.com/kubernetes-incubator/metrics-server/issues/25
https://kubernetes.io/docs/tasks/debug-application-cluster/core-metrics-pipeline/

十、Common kubectl commands

#Show which IPs the cluster components are running on
kubectl cluster-info
#Show status across all namespaces
kubectl get --all-namespaces -o wide cs
kubectl get --all-namespaces -o wide csr
kubectl get --all-namespaces -o wide csr|grep 'Pending' | awk 'NR>0{print $1}'| xargs kubectl certificate approve

kubectl get --all-namespaces -o wide nodes
kubectl get --all-namespaces -o wide all
kubectl get --all-namespaces -o wide pods
kubectl get --all-namespaces -o wide svc
kubectl get --all-namespaces -o wide deployment
kubectl get --all-namespaces -o wide serviceaccount
kubectl get --all-namespaces -o wide secret
kubectl get --all-namespaces -o wide rc

#Show status in a given namespace (append -o yaml to view the yaml definition)
NAMESPACE=kube-system
kubectl -n kube-system get all -o wide
kubectl -n kube-system get nodes -o wide
kubectl -n kube-system get pods -o wide
kubectl -n kube-system get svc -o wide
kubectl -n kube-system get deployment -o wide
kubectl -n kube-system get serviceaccount -o wide
kubectl -n kube-system get secret -o wide
kubectl -n kube-system get rc -o wide

#Describe resources across all namespaces
kubectl describe --all-namespaces all

#Describe resources in a given namespace (the trailing name may be omitted)
NAMESPACE=kube-system
kubectl -n kube-system describe nodes 192.168.10.53
kubectl -n kube-system describe pod kubernetes-dashboard
kubectl -n kube-system describe svc kubernetes-dashboard
kubectl -n kube-system describe deployment kubernetes-dashboard
kubectl -n kube-system describe serviceaccount kubernetes-dashboard
kubectl -n kube-system describe secret admin-user-token
kubectl -n kube-system describe rc

kubectl get clusterroles --namespace=kube-system |grep heapster
kubectl describe clusterroles --namespace=kube-system system:kubelet-api-admin
#View pod logs
kubectl -n kube-system get pods
kubectl -n kube-system logs pod/kubernetes-dashboard-7b7bf9bcbd-h57xk

#Use a custom cert for the dashboard
kubectl -n kube-system create secret generic kubernetes-dashboard-certs --from-file=/usr/local/kubernetes/ssl/kube-dashboard.pem --from-file=/usr/local/kubernetes/ssl/kube-dashboard-key.pem

#Delete the token secret
kubectl -n kube-system delete secret kubernetes-dashboard-key
#Delete all pods
kubectl -n kube-system delete --all pods
#Delete a node from the cluster
kubectl delete node 192.168.10.52
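#A node is normally cordoned and drained before the delete above, so its workloads get rescheduled first (the node name is just an example)
kubectl cordon 192.168.10.52
kubectl drain 192.168.10.52 --ignore-daemonsets --delete-local-data --force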

十一、Start/stop commands for the kubernetes nodes

#Master node services (run on each master node)
#Start
systemctl daemon-reload
systemctl restart etcd
systemctl restart keepalived
systemctl restart kube-apiserver
systemctl restart kube-controller-manager
systemctl restart kube-scheduler
#Check status
systemctl daemon-reload
systemctl status etcd
systemctl status keepalived
systemctl status kube-apiserver
systemctl status kube-controller-manager
systemctl status kube-scheduler
#Check logs
journalctl -f -t etcd
journalctl -f -u etcd
journalctl -f -u keepalived
journalctl -f -u kube-apiserver
journalctl -f -u kube-controller-manager
journalctl -f -u kube-scheduler
#Stop
systemctl daemon-reload
systemctl stop etcd
systemctl stop keepalived
systemctl stop kube-apiserver
systemctl stop kube-controller-manager
systemctl stop kube-scheduler

#Node services (run on each worker node)
#Start
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet
systemctl restart kube-proxy
systemctl restart flanneld

#Check status
systemctl daemon-reload
systemctl status docker
systemctl status kubelet
systemctl status kube-proxy
systemctl status flanneld

#Check logs
journalctl -f -u docker
journalctl -f -u kubelet
journalctl -f -u kube-proxy
journalctl -f -u flanneld

#Stop
systemctl daemon-reload
systemctl stop docker
systemctl stop kubelet
systemctl stop kube-proxy
systemctl stop flanneld

for i in etcd kube-apiserver kube-controller-manager kube-scheduler;do
systemctl disable $i
done

for i in docker kubelet kube-proxy flanneld;do
systemctl disable $i
done

for i in 192.168.10.52 192.168.10.53;do
#scp /lib/systemd/system/kube* $i:/lib/systemd/system
#scp /lib/systemd/system/flanneld* $i:/lib/systemd/system
#scp /lib/systemd/system/etcd* $i:/lib/systemd/system
scp -r /usr/local/kubernetes/cfg $i:/usr/local/kubernetes
done

etcdctl \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
--endpoints=https://192.168.10.51:2379,https://192.168.10.52:2379,https://192.168.10.53:2379 \
cluster-health

etcdctl \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
--endpoints=https://192.168.10.51:2379,https://192.168.10.52:2379,https://192.168.10.53:2379 \
member list

#View the cluster Pod network segment (/16)
etcdctl \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
get /kubernetes/network/config
#List the allocated Pod subnets (/24)
etcdctl \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
ls /kubernetes/network/subnets
* View the IP and network parameters of the flanneld process for a given Pod subnet
etcdctl \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
get /kubernetes/network/subnets/10.2.53.0-24

十二、Clean up the cluster

## 1、Clean up the Node nodes
* Stop the related services:
sudo systemctl stop kubelet kube-proxy flanneld docker
* Clean up files:
* Unmount the directories mounted by kubelet
mount | grep '/var/lib/kubelet'| awk '{print $3}'|xargs sudo umount
* Delete the kubelet working directory
sudo rm -rf /var/lib/kubelet
* Delete the network configuration files written by flanneld
sudo rm -rf /var/run/flannel/
* Delete the docker working directory
sudo rm -rf /var/lib/docker
* Delete docker runtime files
sudo rm -rf /var/run/docker/
* Clean up the iptables rules created by kube-proxy and docker:
sudo iptables -F && sudo iptables -X && sudo iptables -F -t nat && sudo iptables -X -t nat
* Delete the interfaces created by flanneld and docker:
ip link del flannel.1
ip link del docker0
* Delete the systemd unit files
sudo rm -rf /etc/systemd/system/{kubelet,docker,flanneld}.service
* Delete the certificate files
sudo rm -rf /etc/flanneld/cert /usr/local/k8s/cert
* Delete the binaries
sudo rm -rf /usr/local/k8s/bin/*
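
* The steps above are per-node. A sketch for running the destructive parts on every worker from k8s-01m, assuming environment.sh and the passwordless root ssh set up earlier — review carefully before running:
source /usr/local/k8s/bin/environment.sh
for worker_ip in ${WORKER_IPS[@]};do
    echo ">>> ${worker_ip}"
    ssh root@${worker_ip} "systemctl stop kubelet kube-proxy flanneld docker"
    ssh root@${worker_ip} "rm -rf /var/lib/kubelet /var/run/flannel/ /var/lib/docker /var/run/docker/"
done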

## 2、Clean up the Master nodes
* Stop the related services:
sudo systemctl stop kube-apiserver kube-controller-manager kube-scheduler
* Clean up files:
* Delete the kube-apiserver working directory
sudo rm -rf /var/run/kubernetes
* Delete the systemd unit files
sudo rm -rf /etc/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service
* Delete the certificate files
sudo rm -rf /etc/flanneld/cert /usr/local/k8s/cert
* Delete the binaries
sudo rm -rf /usr/local/k8s/bin/{kube-apiserver,kube-controller-manager,kube-scheduler}

## 3、Clean up the etcd cluster
* Stop the related service:
sudo systemctl stop etcd
* Clean up files:
* Delete the etcd working and data directories
sudo rm -rf /var/lib/etcd
* Delete the systemd unit file
sudo rm -rf /etc/systemd/system/etcd.service
* Delete the x509 certificate files
sudo rm -rf /etc/etcd/cert/*
* Delete the binary
sudo rm -rf /usr/local/k8s/bin/etcd