Kubernetes Deep Dive in Practice (Part 2)

2022-03-06  哦呵呵_3579

Although plenty of tools nowadays can stand up a production-grade Kubernetes cluster very conveniently, I still recommend that people new to Kubernetes build a cluster from the binaries at least once; it leads to a much deeper understanding of how Kubernetes works. Below are the notes I wrote when I did this myself. They target version 1.16, which is fairly old by now, but the fundamentals are largely the same. I go into the certificates and authentication in some detail to make them easier to understand.

Setting up Kubernetes 1.16 from binaries

Preparation

Nodes

master 10.100.103.44
worker1 10.100.103.45
worker2 10.100.103.46

All required files

https://pan.baidu.com/s/1EzRtnKSwQ3AcX4KE6z2Vlg (extraction code: 6ven)

Certificates overview

Certificates are arguably the most tedious part of the whole setup, so let's first sort out which certificates are needed.

etcd cluster

1. etcd serves external clients, so it needs a set of etcd server certificates
2. etcd members communicate with each other, so they need a set of etcd peer certificates
3. kube-apiserver and flannel access etcd, so they need a set of etcd client certificates

Kubernetes cluster

1. kube-apiserver serves clients, so it needs a kube-apiserver server certificate
2. kubectl accesses kube-apiserver, so it needs a client certificate with admin privileges
3. kube-controller-manager accesses kube-apiserver, so it needs a client certificate
4. kube-scheduler accesses kube-apiserver, so it needs a client certificate
5. kube-proxy accesses kube-apiserver, so it needs a client certificate
6. kubelet serves requests, so it needs a server certificate
7. kube-apiserver calls kubelet, so it needs a kubelet client certificate

Because every component that talks to kube-apiserver has different permissions, each one gets its own independent client certificate and has to be created separately. The kubelet certificates are the troublesome ones: each kubelet certificate is unique (bound to its node's IP), and creating one for every node by hand would be very tedious. Kubernetes 1.4 therefore introduced TLS bootstrapping, a dedicated certificate-signing API, so that kubelet certificates can be issued dynamically; for now we can set the kubelet certificates aside.
How it works:
When a kubelet starts for the first time, it authenticates with a shared bootstrap token. That token has been assigned in advance to the group system:bootstrappers, and the permissions of that group are restricted to requesting certificates. After authenticating with the bootstrap token, the kubelet requests its own two certificates (a kubelet server certificate and a client certificate for kube-apiserver). Once they are issued, it switches to its own certificates for authentication and thereby gains the permissions a kubelet should have. This removes the manual step of preparing a certificate for every kubelet, and the kubelet certificates can also be rotated automatically.

Summary

Creating and maintaining a full set of Kubernetes certificates is quite a chore. There are plenty of simplified ways to generate the certificates floating around online, but in order to make the certificate and authentication system as clear as possible, everything in this article is signed the proper, formal way.

Setting up the etcd cluster

etcd is the data persistence layer for the entire Kubernetes cluster and must be started first. Given how critical etcd is, deploy it as a cluster.
etcd needs three basic sets of certificates:
A. server certificates, for serving clients
B. peer certificates, for communication between cluster members
C. client certificates, for calling the server API
1. Create the etcd CA config file

mkdir -p /etc/cert/etcd && cd /etc/cert/etcd

cat >etcd-ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "peer": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      },
      "client": {
        "usages": [
            "signing",
            "key encipherment",
            "client auth"
        ],
        "expiry": "87600h"
      },
      "server": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

This config contains three profiles: peer, for signing peer certificates; server, for signing server certificates; and client, for signing client certificates.
2. Create the CA certificate signing request

cat > etcd-ca-csr.json <<EOF
{
  "CN": "etcd-ca",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing"
    }
  ]
}
EOF

3. Generate the CA certificate and private key

cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare etcd-ca

4. Create the server certificate signing request file
The server certificate must specify the IPs it will be used with, so all node IPs have to be written into it; if you later add nodes, the certificate has to be regenerated.

cat > kube-etcd-server-csr.json <<EOF
{
  "CN": "kube-etcd-server",
  "hosts": [
    "127.0.0.1",
    "localhost",
    "10.100.103.44",
    "10.100.103.45",
    "10.100.103.46"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing"
    }
  ]
}
EOF

5. Generate the server certificate
Since this is a server certificate, the profile must be set to server.

cfssl gencert -ca=etcd-ca.pem \
    -ca-key=etcd-ca-key.pem \
    -config=etcd-ca-config.json \
    -profile=server kube-etcd-server-csr.json | cfssljson -bare kube-etcd-server
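It is worth a quick sanity check that all node IPs actually ended up in the certificate's SAN list (a simple check, assuming openssl is available on the machine):

openssl x509 -in kube-etcd-server.pem -noout -text | grep -A 1 "Subject Alternative Name"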

6. Create the signing request file for the peer certificate used between cluster members
This certificate is used for peer traffic (port 2380 by default) and acts as both a server and a client certificate, so all node IPs must be written into it; changing the membership later means regenerating the certificate.

cat > kube-etcd-peer-csr.json <<EOF
{
  "CN": "kube-etcd-peer",
  "hosts": [
    "127.0.0.1",
    "localhost",
    "10.100.103.44",
    "10.100.103.45",
    "10.100.103.46"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing"
    }
  ]
}
EOF

7. Generate the peer certificate
The peer and server certificates could in fact be signed by different CAs, but unless you have a specific need for that, using a single CA keeps things from getting confusing.

cfssl gencert -ca=etcd-ca.pem \
    -ca-key=etcd-ca-key.pem \
    -config=etcd-ca-config.json \
    -profile=peer kube-etcd-peer-csr.json | cfssljson -bare kube-etcd-peer

8. Create the client certificate signing request file

cat > kube-etcd-client-csr.json <<EOF
{
  "CN": "kube-etcd-client",
  "hosts": [
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing"
    }
  ]
}
EOF

9. Generate the client certificate

cfssl gencert -ca=etcd-ca.pem \
    -ca-key=etcd-ca-key.pem \
    -config=etcd-ca-config.json \
    -profile=client kube-etcd-client-csr.json | cfssljson -bare kube-etcd-client

10. Distribute the certificates to each node

scp * root@10.100.103.45:/etc/cert/etcd
scp * root@10.100.103.46:/etc/cert/etcd

The etcd certificates are now complete; next we get the etcd cluster running.
11. Download the release package, extract it, and install the binaries on every node

# extract the etcd package
tar -zxvf etcd*
cd etcd-v3.3.17-linux-amd64
# make the binaries executable
chmod +x etcd etcdctl
# copy them into place
cp etcd etcdctl /usr/bin

12. Write etcd.service (adjust --name and the IP addresses for each node)

cat > /usr/lib/systemd/system/etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/usr/bin
# User=etcd
ExecStart=/usr/bin/etcd \
  --name etcd0 \
  --cert-file=/etc/cert/etcd/kube-etcd-server.pem \
  --key-file=/etc/cert/etcd/kube-etcd-server-key.pem \
  --trusted-ca-file=/etc/cert/etcd/etcd-ca.pem \
  --peer-cert-file=/etc/cert/etcd/kube-etcd-peer.pem \
  --peer-key-file=/etc/cert/etcd/kube-etcd-peer-key.pem \
  --peer-trusted-ca-file=/etc/cert/etcd/etcd-ca.pem \
  --peer-client-cert-auth=true \
  --peer-auto-tls=true \
  --initial-advertise-peer-urls https://10.100.103.44:2380 \
  --listen-peer-urls https://10.100.103.44:2380 \
  --listen-client-urls https://10.100.103.44:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://10.100.103.44:2379 \
  --initial-cluster-token kube-etcd-cluster \
  --initial-cluster etcd0=https://10.100.103.44:2380,etcd1=https://10.100.103.45:2380,etcd2=https://10.100.103.46:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd

13. Verify the etcd status

systemctl status etcd
etcdctl --cert-file=/etc/cert/etcd/kube-etcd-client.pem --key-file=/etc/cert/etcd/kube-etcd-client-key.pem --ca-file=/etc/cert/etcd/etcd-ca.pem member list

14. At this point the etcd cluster is up. The cluster setup itself rarely has pitfalls, but the etcd API does: it comes in v2 and v3 versions, the etcdctl shipped in the release package defaults to v2, and data written through one API version is not visible through the other. You can set

export ETCDCTL_API=3

to choose which API version you are talking to. In particular, flannel uses the v2 API while kube-apiserver uses v3, and the two versions differ in subtle ways, so make sure you are clear about this!
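For reference, the v3 API also renames the TLS flags, so the member list query from above looks slightly different under v3 (a sketch based on etcdctl 3.3; double-check the flag names against your etcdctl version):

export ETCDCTL_API=3
etcdctl --endpoints=https://10.100.103.44:2379 \
  --cacert=/etc/cert/etcd/etcd-ca.pem \
  --cert=/etc/cert/etcd/kube-etcd-client.pem \
  --key=/etc/cert/etcd/kube-etcd-client-key.pem \
  member list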

Setting up flannel

flannel is the CNI component that provides cross-node communication. Its principle is fairly simple: after it starts, flannel watches a directory in etcd and carves out one subnet per node from the configured network range. Docker on each node then uses the subnet flannel assigned, so the docker subnets of all nodes are distinct and no pod IPs ever overlap. flannel also writes the corresponding entries into the route table, so when a pod on one node needs to reach something on another node, the traffic is forwarded to the right node according to those routes.
The steps below must be performed on every node.
Because flannel watches etcd, it uses the etcd client certificate.
1. Extract flannel and move the binaries into place

tar -zxvf flannel-v0.11.0-linux-amd64.tar.gz
chmod +x flanneld
chmod +x mk-docker-opts.sh
cp flanneld /usr/bin
cp mk-docker-opts.sh /usr/bin

2. Write the network range for flannel into etcd

# switch etcdctl to the v2 API first
export ETCDCTL_API=2

etcdctl --cert-file=/etc/cert/etcd/kube-etcd-client.pem --key-file=/etc/cert/etcd/kube-etcd-client-key.pem --ca-file=/etc/cert/etcd/etcd-ca.pem mkdir /kubernetes/network

etcdctl --cert-file=/etc/cert/etcd/kube-etcd-client.pem --key-file=/etc/cert/etcd/kube-etcd-client-key.pem --ca-file=/etc/cert/etcd/etcd-ca.pem mk /kubernetes/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'
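You can read the value back to make sure it landed where flannel expects it (still through the v2 API):

etcdctl --cert-file=/etc/cert/etcd/kube-etcd-client.pem --key-file=/etc/cert/etcd/kube-etcd-client-key.pem --ca-file=/etc/cert/etcd/etcd-ca.pem get /kubernetes/network/config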

3. Write flanneld.service

cat > /usr/lib/systemd/system/flanneld.service <<EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/usr/bin/flanneld \
  -etcd-cafile=/etc/cert/etcd/etcd-ca.pem \
  -etcd-certfile=/etc/cert/etcd/kube-etcd-client.pem \
  -etcd-keyfile=/etc/cert/etcd/kube-etcd-client-key.pem \
  -etcd-endpoints=https://10.100.103.44:2379,https://10.100.103.45:2379,https://10.100.103.46:2379 \
  -etcd-prefix=/kubernetes/network
ExecStartPost=/usr/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF

4. Start flannel

systemctl enable flanneld
systemctl start flanneld

5. Verify the status
You should see a new flannel.1 device, and its IP is a subnet of the network range written into etcd.

systemctl status flanneld
ip a

6. Configure docker to use the flannel network

vi /etc/systemd/system/multi-user.target.wants/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

Add the EnvironmentFile line and modify the ExecStart line as shown above.
7. Restart docker and verify its status

systemctl daemon-reload
systemctl restart docker

ip a

You can see that docker0 is now on the flannel subnet.
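Two more places worth a quick look (the file below is generated by the mk-docker-opts.sh step; exact contents vary with the flannel version):

# the env file consumed by docker.service, carrying the --bip/--mtu options for this node's subnet
cat /run/flannel/docker
# with the vxlan backend, each remote node's subnet should be routed via the flannel.1 device
ip route | grep flannel.1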

Setting up the master node

With a manual installation there is no strict notion of master versus worker: any node that runs kubelet is a worker, and "master" simply means the node also runs the kube-apiserver, kube-controller-manager, and kube-scheduler services. This is different from a cluster installed directly with kubeadm.

Create the CA certificate for the Kubernetes cluster

mkdir -p /etc/cert/kubernetes
cd /etc/cert/kubernetes
## create the signing config file
cat > k8s-ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "peer": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      },
      "client": {
        "usages": [
            "signing",
            "key encipherment",
            "client auth"
        ],
        "expiry": "87600h"
      },
      "server": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF


## create the CA signing request
cat > k8s-ca-csr.json <<EOF
{
  "CN": "kubernetes-ca",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing"
    }
  ]
}
EOF

## generate the CA certificate
cfssl gencert -initca k8s-ca-csr.json | cfssljson -bare k8s-ca

## distribute the certificates to the other nodes
scp /etc/cert/kubernetes/* root@10.100.103.45:/etc/cert/kubernetes
scp /etc/cert/kubernetes/* root@10.100.103.46:/etc/cert/kubernetes

Setting up kube-apiserver

kube-apiserver is a stateless service, so it can be made highly available simply by placing nginx or keepalived in front of it (with a keepalived VIP, only one instance receives traffic at any given time).
It is also the entry point of the whole cluster and must be started before the other control-plane components. kube-apiserver exposes the API that the other components call and continuously watches etcd, so it needs its own server certificate plus an etcd client certificate.
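Setting up that load balancer is outside the scope of this article, but as an illustration, a TCP proxy in front of the three apiservers could be as small as the nginx stream block below (a hypothetical sketch, assuming nginx is built with the stream module; the 10.100.103.47 address that appears in the certificate hosts later could be such a load balancer):

# /etc/nginx/nginx.conf -- hypothetical TCP load balancer in front of kube-apiserver
stream {
    upstream kube_apiserver {
        server 10.100.103.44:8443;
        server 10.100.103.45:8443;
        server 10.100.103.46:8443;
    }
    server {
        listen 6443;
        proxy_pass kube_apiserver;
    }
}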
1. Install the binaries

tar -zxvf kubernetes-server-linux-amd64.tar.gz && cd kubernetes/server/bin
chmod +x kube-controller-manager kube-scheduler kube-apiserver
cp kube-apiserver /usr/bin
cp kube-controller-manager /usr/bin
cp kube-scheduler /usr/bin

2. Create the server certificate

## Create the server signing request. The hosts list must contain all master IPs, the load balancer IP, and any public IPs, plus the first IP of the cluster service IP range (I use 20.0.0.0/24 as the service range; any range that does not overlap the host network is fine). When kube-apiserver starts, it automatically creates the default kubernetes service with the cluster IP 20.0.0.1.

cat > /etc/cert/kubernetes/k8s-apiserver-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.100.103.44",
    "10.100.103.45",
    "10.100.103.46",
    "10.100.103.47",
    "20.0.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing"
    }
  ]
}
EOF

## generate the server certificate
cfssl gencert -ca=/etc/cert/kubernetes/k8s-ca.pem \
  -ca-key=/etc/cert/kubernetes/k8s-ca-key.pem \
  -config=/etc/cert/kubernetes/k8s-ca-config.json \
  -profile=server k8s-apiserver-csr.json | cfssljson -bare k8s-apiserver

3. Create the kubelet bootstrap token file

cat > /etc/cert/kubernetes/token.csv <<EOF
0b98a600d1a0fcc29d23139dadfbe0c0,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
The format is token,user,uid,"group". The token can be replaced with one you generate yourself, for example with:
head -c 16 /dev/urandom | od -An -t x | tr -d ' '

4. Distribute the certificates and the token file

scp /etc/cert/kubernetes/* root@10.100.103.45:/etc/cert/kubernetes/
scp /etc/cert/kubernetes/* root@10.100.103.46:/etc/cert/kubernetes/

5. Write the service file

cat > /usr/lib/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server

[Service]
ExecStart=/usr/bin/kube-apiserver \
    --logtostderr=false \
    --v=2 \
    --log-dir=/var/log/kubernetes \
    --etcd-servers=https://10.100.103.44:2379,https://10.100.103.45:2379,https://10.100.103.46:2379 \
    --bind-address=10.100.103.44 \
    --secure-port=8443 \
    --insecure-port=0 \
    --advertise-address=10.100.103.44 \
    --allow-privileged=true \
    --service-cluster-ip-range=20.0.0.0/24 \
    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
    --authorization-mode=RBAC,Node \
    --enable-bootstrap-token-auth=true \
    --token-auth-file=/etc/cert/kubernetes/token.csv \
    --service-node-port-range=30000-32767 \
    --tls-cert-file=/etc/cert/kubernetes/k8s-apiserver.pem  \
    --tls-private-key-file=/etc/cert/kubernetes/k8s-apiserver-key.pem \
    --client-ca-file=/etc/cert/kubernetes/k8s-ca.pem \
    --service-account-key-file=/etc/cert/kubernetes/k8s-ca-key.pem \
    --etcd-cafile=/etc/cert/etcd/etcd-ca.pem \
    --etcd-certfile=/etc/cert/etcd/kube-etcd-client.pem \
    --etcd-keyfile=/etc/cert/etcd/kube-etcd-client-key.pem \
    --apiserver-count=3 \
    --audit-log-maxage=30 \
    --audit-log-maxbackup=3 \
    --audit-log-maxsize=100 \
    --audit-log-path=/var/log/kubernetes/k8s-audit.log

Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

## change the IP addresses for each node
## the log directory must be created in advance

6. Start the service and verify

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
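A quick way to confirm that the secure port is actually serving (the /healthz call assumes the default RBAC bootstrap policy, which allows anonymous reads of /healthz and /version; if it returns 401, the port check alone still tells you the listener is up):

ss -lntp | grep 8443
curl -k https://10.100.103.44:8443/healthz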

Configuring kubectl

kubectl is a client of kube-apiserver and needs a kube-apiserver client certificate. It gives you command-line access to the kube-apiserver API; here kubectl is given admin privileges.
1. Generate the client certificate

## create the request
cat > /etc/cert/kubernetes/kubectl-csr.json <<EOF
{
  "CN": "kubernetes-admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters"
    }
  ]
}
EOF

## generate the certificate
cfssl gencert -ca=/etc/cert/kubernetes/k8s-ca.pem \
  -ca-key=/etc/cert/kubernetes/k8s-ca-key.pem \
  -config=/etc/cert/kubernetes/k8s-ca-config.json \
  -profile=client kubectl-csr.json | cfssljson -bare kubectl

2. Generate the kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/cert/kubernetes/k8s-ca.pem \
  --embed-certs=true \
  --server=https://10.100.103.44:8443 \
  --kubeconfig=kubectl.kubeconfig

# set the client credentials
kubectl config set-credentials kubernetes-admin \
  --client-certificate=kubectl.pem \
  --client-key=kubectl-key.pem \
  --embed-certs=true \
  --kubeconfig=kubectl.kubeconfig

# set the context
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=kubernetes-admin \
  --kubeconfig=kubectl.kubeconfig
  
# set the default context
kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig

3. Copy the kubeconfig file to every node where kubectl will be used
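On a node where you want kubectl to work without extra flags, the simplest option is to make this the default config (an optional convenience step):

mkdir -p ~/.kube
cp kubectl.kubeconfig ~/.kube/config
kubectl get componentstatuses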


Setting up kube-controller-manager

kube-controller-manager is a stateful service: in a highly available cluster only one instance is working at any given time while the others block. The instances elect a leader by acquiring a lease lock through kube-apiserver, and whichever instance holds the lock becomes the cluster's leader.
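Once kube-controller-manager is running (after the steps below), you can see which instance currently holds the lock. In this version the leader is usually recorded in an annotation on an Endpoints object; the exact resource depends on the --leader-elect-resource-lock setting, so treat this as a rough check:

kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep control-plane.alpha.kubernetes.io/leader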
1. Generate the client certificate

## create the request
cat > /etc/cert/kubernetes/k8s-controller-manager-csr.json <<EOF
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
    ],
    "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "system:kube-controller-manager"
        }
    ]
}
EOF

## generate the certificate
cfssl gencert -ca=/etc/cert/kubernetes/k8s-ca.pem \
  -ca-key=/etc/cert/kubernetes/k8s-ca-key.pem \
  -config=/etc/cert/kubernetes/k8s-ca-config.json \
  -profile=client k8s-controller-manager-csr.json | cfssljson -bare k8s-controller-manager

2. Generate the kubeconfig file

## set the cluster parameters
## server is the kube-apiserver address; normally this is the load balancer address. Since no load balancer is set up in this example, the master address is used directly
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/cert/kubernetes/k8s-ca.pem \
  --embed-certs=true \
  --server=https://10.100.103.44:8443 \
  --kubeconfig=kube-controller-manager.kubeconfig
  
## set the client credentials
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=k8s-controller-manager.pem \
--client-key=k8s-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig

## set the context
kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig
  
## set the default context
kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

3. Distribute the kubeconfig file

scp kube-controller-manager.kubeconfig root@10.100.103.45:/etc/cert/kubernetes/kube-controller-manager.kubeconfig
scp kube-controller-manager.kubeconfig root@10.100.103.46:/etc/cert/kubernetes/kube-controller-manager.kubeconfig

4. Write the service file

cat > /usr/lib/systemd/system/kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/bin/kube-controller-manager \
  --leader-elect=true \
  --address=127.0.0.1 \
  --port=0 \
  --service-cluster-ip-range=20.0.0.0/24 \
  --cluster-cidr=20.244.0.0/16 \
  --cluster-signing-cert-file=/etc/cert/kubernetes/k8s-ca.pem \
  --cluster-signing-key-file=/etc/cert/kubernetes/k8s-ca-key.pem \
  --kubeconfig=/etc/cert/kubernetes/kube-controller-manager.kubeconfig \
  --authentication-kubeconfig=/etc/cert/kubernetes/kube-controller-manager.kubeconfig \
  --experimental-cluster-signing-duration=8760h \
  --root-ca-file=/etc/cert/kubernetes/k8s-ca.pem \
  --client-ca-file=/etc/cert/kubernetes/k8s-ca.pem \
  --service-account-private-key-file=/etc/cert/kubernetes/k8s-ca-key.pem \
  --tls-cert-file=/etc/cert/kubernetes/k8s-controller-manager.pem \
  --tls-private-key-file=/etc/cert/kubernetes/k8s-controller-manager-key.pem \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

5. Start and verify

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

Setting up kube-scheduler

kube-scheduler is similar to kube-controller-manager: it is also a stateful service and uses the same leader-election mechanism. It only needs to call the kube-apiserver API, so a kube-apiserver client certificate is all it requires.
1. Generate the client certificate

## create the request
cat > /etc/cert/kubernetes/k8s-scheduler-csr.json <<EOF
{
    "CN": "system:kube-scheduler",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-scheduler"
      }
    ]
}
EOF

## generate the certificate
cfssl gencert -ca=/etc/cert/kubernetes/k8s-ca.pem \
  -ca-key=/etc/cert/kubernetes/k8s-ca-key.pem \
  -config=/etc/cert/kubernetes/k8s-ca-config.json \
  -profile=client k8s-scheduler-csr.json | cfssljson -bare k8s-scheduler

2. Generate the kubeconfig file

  kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/cert/kubernetes/k8s-ca.pem \
  --embed-certs=true \
  --server=https://10.100.103.44:8443 \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
  --client-certificate=k8s-scheduler.pem \
  --client-key=k8s-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

3. Distribute the kubeconfig file to the corresponding nodes

scp kube-scheduler.kubeconfig root@10.100.103.45:/etc/cert/kubernetes/kube-scheduler.kubeconfig
scp kube-scheduler.kubeconfig root@10.100.103.46:/etc/cert/kubernetes/kube-scheduler.kubeconfig

4. Write the service file

cat > /usr/lib/systemd/system/kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/bin/kube-scheduler \
  --address=127.0.0.1 \
  --port=0 \
  --kubeconfig=/etc/cert/kubernetes/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

5. Start and verify

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler

At this point the Kubernetes cluster can actually run; it is only missing kubelet to do the final scheduling and execution work, but the kube-apiserver API can already be called normally. In fact etcd, kube-apiserver, kube-scheduler, and kube-controller-manager have little to do with containers themselves; the concrete work of scheduling and running containers is done by kubelet. The popular installer tools are convenient, but they are very much black boxes that hide a lot of detail; if you really want to understand Kubernetes in depth, I recommend going through this manual setup at least once.

Setting up the worker nodes

Setting up kubelet

kubelet is the core agent on every worker node. It manages the lifecycle of every pod on its node and communicates with kube-apiserver to keep state in sync. For node security, each kubelet certificate is bound to its node's IP, which helps keep the cluster safe.
kube-apiserver also calls kubelet's API, so kubelet and kube-apiserver use mutual TLS to secure their communication.
For these reasons, every kubelet's certificate is unique. In a large cluster, making a certificate for every node by hand would be very painful, so Kubernetes issues kubelet certificates dynamically through TLS bootstrapping.
1. Generate the bootstrap kubeconfig

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/cert/kubernetes/k8s-ca.pem \
  --embed-certs=true \
  --server=https://10.100.103.44:8443 \
  --kubeconfig=bootstrap.kubeconfig
# set the client credentials; the token must match the one in token.csv
kubectl config set-credentials kubelet-bootstrap \
  --token=0b98a600d1a0fcc29d23139dadfbe0c0 \
  --kubeconfig=bootstrap.kubeconfig

# set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
  
# set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

2. Add the kubelet config file

cat > /etc/cert/kubernetes/kubelet-config.yml <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 20.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
    anonymous:
        enabled: false
    webhook:
        cacheTTL: 2m0s
        enabled: true
    x509:
        clientCAFile: /etc/cert/kubernetes/k8s-ca.pem

authorization:
    mode: Webhook
    webhook:
        cacheAuthorizedTTL: 5m0s
        cacheUnauthorizedTTL: 30s
evictionHard:
    imagefs.available: 15%
    memory.available: 100Mi
    nodefs.available: 10%
    nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

3. Write the service file

cat > /usr/lib/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/bin/kubelet \
  --bootstrap-kubeconfig=/etc/cert/kubernetes/bootstrap.kubeconfig \
  --cert-dir=/etc/cert/kubernetes \
  --kubeconfig=/etc/cert/kubernetes/kubelet.kubeconfig \
  --config=/etc/cert/kubernetes/kubelet-config.yml \
  --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1 \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2

Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

## the log directory and the WorkingDirectory must be created in advance

4. Start kubelet

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

5. Allow the kubelet nodes to join the cluster via kubectl

kubectl get csr

## approve the kubelets whose CSRs are still in Pending state so they join the cluster

kubectl certificate approve node-csr-XXXXXXXXX
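After approval the node should register, and the kubelet writes the issued certificates into the directory given by --cert-dir (the exact file names vary slightly by version):

kubectl get nodes
ls /etc/cert/kubernetes/ | grep kubelet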

6. Common problems
Q: kubelet fails on startup with "User "kubelet-bootstrap" cannot create resource "certificatesigningrequests" in API group "certifica"
A: First check whether that user is bound to the corresponding clusterrole.

kubectl get clusterrolebinding
kubectl describe clusterrolebinding kubelet-bootstrap
## if the corresponding clusterrolebinding does not exist, just create it
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
## if it exists and the problem persists, delete the existing clusterrolebinding and add it again

Setting up kube-proxy

kube-proxy does not strictly have to run on every kubelet node: it is what programs the Service rules on a node, so you only need to add it where services have to be reachable from the host (NodePort, for example). Direct pod-to-pod access works without it, but access through Service VIPs on that node does not, so add it on the nodes that need it. Note that one kube-proxy pairs with one kubelet; it does not run usefully on its own.
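Once kube-proxy is running on a node (after the steps below), a quick way to see that it is doing its job is to look at the Service chains it programs in iptables (assuming the default iptables proxy mode):

iptables -t nat -L KUBE-SERVICES -n | head -n 20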
1. Generate the kube-apiserver client certificate

cat > /etc/cert/kubernetes/kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing"
    }
  ]
}
EOF

cfssl gencert -ca=/etc/cert/kubernetes/k8s-ca.pem \
  -ca-key=/etc/cert/kubernetes/k8s-ca-key.pem \
  -config=/etc/cert/kubernetes/k8s-ca-config.json \
  -profile=client  kube-proxy-csr.json | cfssljson -bare kube-proxy

2. Write the certificate information into the kubeconfig file

 kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/cert/kubernetes/k8s-ca.pem \
  --embed-certs=true \
  --server=https://10.100.103.44:8443 \
  --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

3. Distribute the kubeconfig file to the corresponding nodes

scp kube-proxy.kubeconfig root@10.100.103.45:/etc/cert/kubernetes/kube-proxy.kubeconfig
scp kube-proxy.kubeconfig root@10.100.103.46:/etc/cert/kubernetes/kube-proxy.kubeconfig

4. Write the kube-proxy.service file

cat > /usr/lib/systemd/system/kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy
After=network.service
Requires=network.service

[Service]
ExecStart=/usr/bin/kube-proxy \
  --kubeconfig=/etc/cert/kubernetes/kube-proxy.kubeconfig \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2

Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

5. Start the service

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy

At this point the basic Kubernetes cluster is complete and you can already deploy services, but the cluster still lacks DNS: there is no in-cluster name resolution yet, so everything has to be reached by IP. We therefore install CoreDNS to provide name resolution inside the cluster.

Installing CoreDNS

1. Create coredns.yml

cat > coredns.yml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local. in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
    spec:
      serviceAccountName: coredns
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.0.6
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 20.0.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
EOF

2. Deploy

kubectl apply -f coredns.yml

3. Notes
A. The clusterIP used in coredns.yml must be an IP within the cluster service IP range
B. The clusterIP in coredns.yml must match the clusterDNS value in the kubelet config file
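A simple end-to-end check that DNS works (this assumes the node can pull busybox:1.28; the nslookup in newer busybox images is known to be unreliable):

kubectl run dns-test --image=busybox:1.28 --restart=Never -it --rm -- nslookup kubernetes.default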

Summary

1. The kubeconfig files used by the various components can simply be written by hand; generating them through kubectl config is not mandatory (see the sketch below).
2. The CN of every certificate can be changed to whatever name you prefer, as long as you adjust the corresponding clusterrolebinding in the cluster.
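For point 1, a hand-written kubeconfig is just a small YAML file. A minimal sketch (the paths and names are illustrative, reusing the kube-scheduler files from this guide):

apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    certificate-authority: /etc/cert/kubernetes/k8s-ca.pem
    server: https://10.100.103.44:8443
users:
- name: system:kube-scheduler
  user:
    client-certificate: /etc/cert/kubernetes/k8s-scheduler.pem
    client-key: /etc/cert/kubernetes/k8s-scheduler-key.pem
contexts:
- name: default
  context:
    cluster: kubernetes
    user: system:kube-scheduler
current-context: default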
