
Installing and Deploying a Kubernetes v1.10 Cluster on CentOS 7

2018-04-24  LanK陈

This guide draws on Google results and the official documentation.
It uses kubeadm for the installation and does not require a VPN to reach blocked sites.
kubeadm is a tool that helps you bootstrap a best-practice Kubernetes cluster in a simple, reasonably secure, and extensible way. It also manages Bootstrap Tokens for you and supports upgrading/downgrading the cluster.

Before you begin

 1. Prepare three virtual machines: one Master and two nodes, running CentOS Linux release 7.2. (The hosts need Internet access to pull images, so configure the VirtualBox network with both a NAT adapter and a host-only adapter.)
 2. System configuration (run on all three nodes). You can configure one VM first and then clone it into the other two.

System configuration (run on all three nodes)

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config # permanent change; takes effect after reboot
setenforce 0 # disable SELinux for the current session
# stop and disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# let iptables see bridged traffic (needed by kube-proxy/flannel)
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
swapoff -a # kubelet refuses to start with swap enabled

Edit /etc/fstab and comment out the swap entry so that swap is not mounted automatically at boot; then confirm with free -m that swap is off.
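As a minimal sketch (assuming the swap entry in /etc/fstab contains the word swap and is not already commented out):

# comment out any active swap entry so swap stays off after reboot
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
free -m # the Swap row should now show 0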


Docker installation and configuration (all three nodes)

mkdir ~/k8s
cd ~/k8s
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm
yum install ./docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
yum install ./docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm
systemctl enable docker
systemctl start docker

Docker sets the iptables FORWARD chain policy to DROP, which would break traffic between Pods on different nodes, so an ExecStartPost line is added to the docker.service unit to accept forwarded traffic; the relevant part of the unit file looks like:

ExecStartPost=/usr/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT
ExecStart=/usr/bin/dockerd
.....

Use the Alibaba Cloud registry accelerator: go to the Aliyun container hub at https://dev.aliyun.com/search.html; after logging in, open Management Center --> Image Accelerator --> Documentation and follow the instructions there.

Pitfall: most guides online say to put the accelerator address in /etc/docker/daemon.json, but when I later pulled images they still came from the upstream registry. It took some digging to discover that the setting has to be made in /etc/systemd/system/multi-user.target.wants/docker.service instead.

`Find the ExecStart= line and append the accelerator address to it: --registry-mirror=<accelerator address>`
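For example, a one-line sketch (assuming the default ExecStart=/usr/bin/dockerd line; the mirror URL below is a placeholder for your own accelerator address, and running the command twice would append the flag twice):

# --follow-symlinks edits the real unit file the wants/ entry points at
sed -i --follow-symlinks 's|^ExecStart=/usr/bin/dockerd.*|& --registry-mirror=https://xxxxxxxx.mirror.aliyuncs.com|' /etc/systemd/system/multi-user.target.wants/docker.service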
systemctl daemon-reload && systemctl restart docker && systemctl status docker
Verify:
docker info
or
ps -ef  | grep dockerd
If the configured --registry-mirror parameter appears in the output, the setup succeeded.

Pulling the images:

The required images must be pulled and re-tagged in advance, otherwise kubeadm init will fail; skip this step if you have unrestricted access to k8s.gcr.io.
Master node:

docker pull keveon/kube-apiserver-amd64:v1.10.0
docker pull keveon/kube-scheduler-amd64:v1.10.0
docker pull keveon/kube-controller-manager-amd64:v1.10.0
docker pull keveon/kube-proxy-amd64:v1.10.0
docker pull keveon/k8s-dns-kube-dns-amd64:1.14.8
docker pull keveon/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker pull keveon/k8s-dns-sidecar-amd64:1.14.8
docker pull keveon/etcd-amd64:3.1.12
docker pull keveon/flannel:v0.10.0-amd64
docker pull keveon/pause-amd64:3.1

docker tag keveon/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0
docker tag keveon/kube-scheduler-amd64:v1.10.0 k8s.gcr.io/kube-scheduler-amd64:v1.10.0
docker tag keveon/kube-controller-manager-amd64:v1.10.0 k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
docker tag keveon/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag keveon/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
docker tag keveon/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag keveon/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
docker tag keveon/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12
docker tag keveon/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag keveon/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1

Or run the following script instead:

#!/bin/bash
images=(kube-proxy-amd64:v1.10.0 kube-scheduler-amd64:v1.10.0 kube-controller-manager-amd64:v1.10.0 kube-apiserver-amd64:v1.10.0
etcd-amd64:3.1.12 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3 k8s-dns-sidecar-amd64:1.14.8 k8s-dns-kube-dns-amd64:1.14.8
k8s-dns-dnsmasq-nanny-amd64:1.14.8)
for imageName in "${images[@]}" ; do
  docker pull keveon/$imageName
  docker tag keveon/$imageName k8s.gcr.io/$imageName
  docker rmi keveon/$imageName
done
# flannel is tagged into quay.io rather than k8s.gcr.io, so handle it separately
docker pull keveon/flannel:v0.10.0-amd64
docker tag keveon/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

Run docker images to confirm the images are in place;
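for instance, filtering for the re-tagged registry names:

docker images | grep -E 'k8s\.gcr\.io|quay\.io/coreos'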

Node machines also need the images downloaded before they join the cluster:

docker pull keveon/kube-proxy-amd64:v1.10.0
docker pull keveon/flannel:v0.10.0-amd64
docker pull keveon/pause-amd64:3.1
docker pull keveon/kubernetes-dashboard-amd64:v1.8.3
docker pull keveon/heapster-influxdb-amd64:v1.3.3
docker pull keveon/heapster-grafana-amd64:v4.4.3
docker pull keveon/heapster-amd64:v1.4.2

docker tag keveon/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag keveon/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
docker tag keveon/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag keveon/kubernetes-dashboard-amd64:v1.8.3 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
docker tag keveon/heapster-influxdb-amd64:v1.3.3 k8s.gcr.io/heapster-influxdb-amd64:v1.3.3
docker tag keveon/heapster-grafana-amd64:v4.4.3 k8s.gcr.io/heapster-grafana-amd64:v4.4.3
docker tag keveon/heapster-amd64:v1.4.2 k8s.gcr.io/heapster-amd64:v1.4.2

Or run the following script instead:

#!/bin/bash
images=(kube-proxy-amd64:v1.10.0 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3
heapster-influxdb-amd64:v1.3.3 heapster-grafana-amd64:v4.4.3  heapster-amd64:v1.4.2)
for imageName in "${images[@]}" ; do
  docker pull keveon/$imageName
  docker tag keveon/$imageName k8s.gcr.io/$imageName
  docker rmi keveon/$imageName
done
# flannel belongs under quay.io/coreos, not k8s.gcr.io, so tag it separately
docker pull keveon/flannel:v0.10.0-amd64
docker tag keveon/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

Installing and configuring Kubernetes

Note: run this section on all three nodes.

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF

We are installing the latest version, so a plain yum install -y kubeadm is all that is needed; it pulls in the required dependencies (kubelet, kubectl, and so on).

If you want a specific version, first list what is available:
yum list kubeadm --showduplicates
Run docker info and note the Cgroup Driver value,
then make sure it matches the --cgroup-driver setting in the kubelet config file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
In our installation the two already agree, so nothing needs changing.
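A quick way to compare the two (a sketch; paths as laid down by the stock RPMs):

docker info 2>/dev/null | grep -i 'cgroup driver'
grep 'cgroup-driver' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf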

After installation:

Enable kubelet at boot:
systemctl enable kubelet

Start kubelet:

systemctl start kubelet

The errors now appearing in the system log can be ignored, because the cluster has not been initialized yet.
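To watch what kubelet is logging while you troubleshoot:

journalctl -u kubelet -f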


Initializing the cluster with kubeadm init

Run this section on the Master node only.

Initialization is the key step and has quite a few pitfalls; the command given on some sites is incomplete or does not succeed, e.g.:

kubeadm init  --skip-preflight-checks --kubernetes-version=v1.10.0 --pod-network-cidr=10.244.0.0/16 

The --kubernetes-version flag avoids a timeout (kubeadm otherwise queries the Internet for the latest release), and --pod-network-cidr matches the default configuration of the flannel network we will use between the nodes later. Both flags are needed; if init does not succeed, check /var/log/messages for details.
Pitfall: the command above kept hanging, for two main reasons. First, we had not pulled the images in advance; kubeadm pulls from Google's registry by default, an easy trap for anyone without unrestricted Internet access. Second, the log contained messages like this:

k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://10.0.2.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 10.0.2.15:6443: getsockopt: connection refused

This revealed the problem: the IP in the log is the NAT address we use to reach the Internet, which is obviously not the right one. The official docs explain why:

Unless otherwise specified, kubeadm uses the default gateway’s network interface to advertise the master’s IP. If you want to use a different network interface, specify --apiserver-advertise-address=<ip-address> argument to kubeadm init.

Since our machines have multiple network interfaces, the correct IP address has to be specified explicitly.
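To confirm the address on the host-only interface (enp0s8 in our VMs, the same interface named in the flannel config later on):

ip -4 addr show enp0s8 # should list the 192.168.56.x address used in kubeadm init below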

If initialization failed, clean up and rerun it:

kubeadm reset
kubeadm init --apiserver-advertise-address 192.168.56.20 --pod-network-cidr=10.244.0.0/16  --kubernetes-version=v1.10.0

On success, kubeadm prints a summary of what it did.

The main phases of initialization are:
1. kubeadm runs the pre-flight checks.
2. It generates the token and the certificates.
3. It generates the KubeConfig file that kubelet needs to talk to the Master.
4. It installs the Master components, pulling their Docker images from Google's registry; this can take a while depending on network quality, though local images are used first when present.
5. It installs the add-ons kube-proxy and kube-dns.
6. The Kubernetes Master is initialized successfully.
7. It prints the follow-up instructions.

Follow the instructions printed on success, and write down the kubeadm join line: it is needed later to add the other nodes to the cluster.
To make kubectl work for your non-root user, you might want to run these commands (which is also a part of the kubeadm init output):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
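Alternatively, when working as root you can point kubectl straight at the admin kubeconfig:

export KUBECONFIG=/etc/kubernetes/admin.conf

Then check that the control-plane components are healthy: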
[root@master yum.repos.d]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
etcd-0               Healthy   {"health": "true"}   
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                  

Installing the Pod network (flannel)

Note: run this section on the Master node only.
This amounts to simply deploying software onto the cluster.

$ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
$ kubectl apply -f  kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created

If your nodes have more than one network interface, you need the --iface argument in kube-flannel.yml to name the host's internal-network interface; otherwise DNS resolution may fail. Add --iface=<iface-name> to the flanneld startup arguments:

        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=enp0s8

After the installation finishes:
Verify with ifconfig that the flannel network interface is present.
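With flannel's default VXLAN backend the interface is named flannel.1, so the check looks like:

ifconfig flannel.1 # its address should fall inside the 10.244.0.0/16 Pod network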
Use kubectl get pods to see the running state of the cluster components:

[root@master ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
default       curl-775f9567b5-69tlv                   1/1       Running   2          3d
default       httpd-app-77c9c8f99f-l4xct              1/1       Running   1          3d
default       httpd-app-77c9c8f99f-srr69              1/1       Running   1          3d
default       nginx-deployment-6b5c99b6fd-2b9lx       1/1       Running   1          3d
default       nginx-deployment-6b5c99b6fd-qtqxl       1/1       Running   2          3d
kube-system   etcd-master                             1/1       Running   13         4d
kube-system   heapster-676cc864c6-ggcgr               1/1       Running   1          3d
kube-system   kube-apiserver-master                   1/1       Running   1          1m
kube-system   kube-controller-manager-master          1/1       Running   7          4d
kube-system   kube-dns-86f4d74b45-swwb8               3/3       Running   6          3d
kube-system   kube-flannel-ds-9mljn                   1/1       Running   27         4d
kube-system   kube-flannel-ds-ffqxn                   1/1       Running   3          4d
kube-system   kube-proxy-9t2pw                        1/1       Running   2          4d
kube-system   kube-proxy-jsnsd                        1/1       Running   9          5d
kube-system   kube-scheduler-master                   1/1       Running   7          4d
kube-system   kubernetes-dashboard-7d5dcdb6d9-psld9   1/1       Running   1          3d
kube-system   monitoring-grafana-69df66f668-g6pzw     1/1       Running   1          3d
kube-system   monitoring-influxdb-78d4c6f5b6-d5prw    1/1       Running   1          3d

If everything is in the Running state, the master node was installed successfully. (The nginx and httpd pods are things I deployed later; ignore them.)

By default the master node is tainted so that ordinary workloads are not scheduled on it; to allow Pods to run on the master, remove the taint:

kubectl taint nodes master  node-role.kubernetes.io/master-

Adding the nodes to the cluster

On each node, run the command that was output by kubeadm init. For example:

kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

A few seconds later, you should notice this node in the output from kubectl get nodes when run on the master.

[root@master ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    5d        v1.10.0
node1     Ready     <none>    5d        v1.10.0

Join the other node in the same way.
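If the join command was lost, the token and the CA certificate hash can be recovered on the master (commands per the official kubeadm docs):

kubeadm token list # show existing bootstrap tokens
kubeadm token create # mint a new one if the old token (24h TTL by default) expired
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'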


Deploying the Dashboard add-on

cd ~/k8s
wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

The dashboard image was already pulled earlier in this guide, so only configuration remains:

Since Kubernetes 1.6, kube-apiserver has RBAC authorization enabled, but the dashboard-controller.yaml in the official source tree does not define an authorized ServiceAccount, so its later calls to the API server get rejected. Create the following as kubernetes-dashboard-admin.rbac.yaml:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system

Apply both manifests:

kubectl create -f kubernetes-dashboard.yaml
kubectl create -f kubernetes-dashboard-admin.rbac.yaml

Check the assigned NodePort (this assumes the Dashboard Service in kubernetes-dashboard.yaml was switched to type: NodePort; the stock manifest defaults to ClusterIP):
kubectl get svc,pod --all-namespaces | grep dashboard

Deploy the Heapster add-on so the Dashboard can show resource metrics:

mkdir -p ~/k8s/heapster
cd ~/k8s/heapster
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
kubectl create -f ./

A few more pitfalls: accessing the Dashboard from outside the host requires Firefox; and some permission problems left the page content invisible until - --anonymous-auth=false was added to the kube-apiserver-master manifest.
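To log in on the Dashboard's token screen, one option is to read the token of the kubernetes-dashboard-admin ServiceAccount created above (a sketch):

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-admin | awk '{print $1}') | grep ^token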

References:
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
https://www.centos.bz/2017/12/%E4%BD%BF%E7%94%A8kubeadm%E5%9C%A8centos-7%E4%B8%8A%E5%AE%89%E8%A3%85kubernetes-1-8/
