
Deploying a k8s Cluster (Kubernetes cluster)

2018-06-28  Mariana_

Environment: CentOS 7

kubernetes 1.10.0

Software download link: https://pan.baidu.com/s/1vPdHpoE18ONh3pF4sebqVQ  Password: n6o5

Three hosts:

192.168.1.81 master

192.168.1.82 node1

192.168.1.83 node2

Run the following on all three nodes.

# Disable SELinux: the sed makes the change permanent, setenforce 0 applies it immediately
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0

# Disable the firewall

systemctl stop firewalld && systemctl disable firewalld

Some users on RHEL / CentOS 7 have reported traffic being routed incorrectly because iptables is bypassed. Make sure net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl configuration:

cat <<EOF > /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

EOF

sysctl --system

Disable the swap partition

swapoff -a

sed -i 's/.*swap.*/#&/' /etc/fstab

The packages below were downloaded with wget from a laptop that could get past the firewall.

mkdir ~/k8s

cd ~/k8s

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm

yum install ./docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm

yum install ./docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm

systemctl enable docker

systemctl start docker

Configure Docker

Enable the FORWARD chain of the iptables filter table

Edit /lib/systemd/system/docker.service and add the following line above ExecStart=...:

ExecStartPost=/usr/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT

ExecStart=/usr/bin/dockerd
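If you prefer to script the edit, here is a minimal sketch; it assumes the stock unit file with a single ExecStart= line. After any change to the unit file, systemd has to be reloaded and Docker restarted:

# Insert the ExecStartPost line just above ExecStart= (assumes the stock docker.service)
sed -i '/^ExecStart=/i ExecStartPost=/usr/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT' /lib/systemd/system/docker.service
# Reload systemd and restart Docker so the new rule is applied
systemctl daemon-reload && systemctl restart docker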

Configure a domestic (China) registry mirror. (If you can reach Google's registry directly, this step can be skipped.) kubeadm pulls its images from Google's registry by default, which is currently unreachable from mainland China, so we point Docker at a domestic mirror and pull the required images before running kubeadm init.

Use the Aliyun registry mirror: Aliyun container hub https://dev.aliyun.com/search.html; after logging in, go to Management Center --> Mirror Accelerator --> Documentation and follow the instructions there.

The Aliyun mirror address and setup steps are as follows:

sudo mkdir -p /etc/docker

sudo tee /etc/docker/daemon.json <<-'EOF'

{

  "registry-mirrors": ["https://2mrc6wis.mirror.aliyuncs.com"]

}

EOF

sudo systemctl daemon-reload

sudo systemctl restart docker

Verify:

docker info

ps -ef  | grep dockerd

If the configured mirror shows up in the output (docker info lists it under Registry Mirrors; a --registry-mirror parameter would appear on the dockerd command line if it were passed as a flag), the configuration succeeded.

Operations on the master node

Pull the images:

The required images must be pulled and tagged in advance, otherwise kubeadm init will fail. Skip this if you have unrestricted network access.

Master node:

docker pull keveon/kube-apiserver-amd64:v1.10.0

docker pull keveon/kube-scheduler-amd64:v1.10.0

docker pull keveon/kube-controller-manager-amd64:v1.10.0

docker pull keveon/kube-proxy-amd64:v1.10.0

docker pull keveon/k8s-dns-kube-dns-amd64:1.14.8

docker pull keveon/k8s-dns-dnsmasq-nanny-amd64:1.14.8

docker pull keveon/k8s-dns-sidecar-amd64:1.14.8

docker pull keveon/etcd-amd64:3.1.12

docker pull keveon/flannel:v0.10.0-amd64

docker pull keveon/pause-amd64:3.1

Tag the images

docker tag keveon/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0

docker tag keveon/kube-scheduler-amd64:v1.10.0 k8s.gcr.io/kube-scheduler-amd64:v1.10.0

docker tag keveon/kube-controller-manager-amd64:v1.10.0 k8s.gcr.io/kube-controller-manager-amd64:v1.10.0

docker tag keveon/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0

docker tag keveon/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8

docker tag keveon/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8

docker tag keveon/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8

docker tag keveon/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12

docker tag keveon/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

docker tag keveon/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1

Or run the following script:

#!/bin/bash
# Pull each image from the keveon mirror, retag it as k8s.gcr.io, then remove the mirror tag.
# Note: flannel is not in this list; pull keveon/flannel:v0.10.0-amd64 and tag it as
# quay.io/coreos/flannel:v0.10.0-amd64 separately, as in the commands above.
images=(kube-proxy-amd64:v1.10.0 kube-scheduler-amd64:v1.10.0 kube-controller-manager-amd64:v1.10.0 kube-apiserver-amd64:v1.10.0
etcd-amd64:3.1.12 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3 k8s-dns-sidecar-amd64:1.14.8 k8s-dns-kube-dns-amd64:1.14.8
k8s-dns-dnsmasq-nanny-amd64:1.14.8)

for imageName in "${images[@]}" ; do
  docker pull keveon/$imageName
  docker tag keveon/$imageName k8s.gcr.io/$imageName
  docker rmi keveon/$imageName
done

Operations on node1 and node2

docker pull keveon/kube-proxy-amd64:v1.10.0

docker pull keveon/flannel:v0.10.0-amd64

docker pull keveon/pause-amd64:3.1

docker pull keveon/kubernetes-dashboard-amd64:v1.8.3

docker pull keveon/heapster-influxdb-amd64:v1.3.3

docker pull keveon/heapster-grafana-amd64:v4.4.3

docker pull keveon/heapster-amd64:v1.4.2

docker tag keveon/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

docker tag keveon/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1

docker tag keveon/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0

docker tag keveon/kubernetes-dashboard-amd64:v1.8.3 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3

docker tag keveon/heapster-influxdb-amd64:v1.3.3 k8s.gcr.io/heapster-influxdb-amd64:v1.3.3

docker tag keveon/heapster-grafana-amd64:v4.4.3 k8s.gcr.io/heapster-grafana-amd64:v4.4.3

docker tag keveon/heapster-amd64:v1.4.2 k8s.gcr.io/heapster-amd64:v1.4.2

#!/bin/bash
# Pull each image from the keveon mirror and retag it under the registry the manifests expect.
# flannel must be tagged as quay.io/coreos/flannel; everything else goes to k8s.gcr.io.
images=(kube-proxy-amd64:v1.10.0 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3
heapster-influxdb-amd64:v1.3.3 heapster-grafana-amd64:v4.4.3 heapster-amd64:v1.4.2)

for imageName in "${images[@]}" ; do
  docker pull keveon/$imageName
  docker tag keveon/$imageName k8s.gcr.io/$imageName
  docker rmi keveon/$imageName
done

docker pull keveon/flannel:v0.10.0-amd64
docker tag keveon/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker rmi keveon/flannel:v0.10.0-amd64

Run this section on all three nodes.

Install and configure Kubernetes

Configure the yum repository and install

cat > /etc/yum.repos.d/kubernetes.repo <<EOF

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64

enabled=1

gpgcheck=0

repo_gpgcheck=0

EOF

I installed version 1.10.0; with the latest version, initialization would not get past kubeadm init.

yum install -y kubelet-1.10.0-0  kubectl-1.10.0-0 kubeadm-1.10.0-0

## Make sure the cgroup driver used by kubelet matches the one used by Docker:

sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

## Start the kubelet service:
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

The most critical step

Run this only on the master

kubeadm init --apiserver-advertise-address 192.168.1.81 --pod-network-cidr=10.244.0.0/16  --kubernetes-version=v1.10.0

If initialization fails, run the following cleanup command before retrying; the logs are in /var/log/messages.

kubeadm reset

When initialization completes, be sure to save the last line of its output (the kubeadm join command used to add the nodes).

Set up kubectl access permissions

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config
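As a quick sanity check that kubectl can now reach the API server (any read-only command will do; the master will typically still show NotReady until the pod network below is deployed):

kubectl cluster-info
kubectl get nodes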

Deploy the pod network add-on (flannel)

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl apply -f  kube-flannel.yml

clusterrole.rbac.authorization.k8s.io "flannel" created

clusterrolebinding.rbac.authorization.k8s.io "flannel" created

serviceaccount "flannel" created

configmap "kube-flannel-cfg" created

daemonset.extensions "kube-flannel-ds" created

If your nodes have more than one network interface, you need to use the --iface parameter in kube-flannel.yml to specify the name of the interface on the cluster's internal network, otherwise DNS may fail to resolve. Add --iface=<interface-name> to the flanneld startup arguments, as in the sketch below.
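A minimal sketch of that edit inside the kube-flannel container spec in kube-flannel.yml; the surrounding fields follow the flannel v0.10.0 manifest and may differ slightly in the file you downloaded, and eth0 is only a placeholder for your internal NIC name:

      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth0   # added: placeholder, use the host's internal interface name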

After installation:

Run ifconfig to verify that the flannel network interface exists.
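For example (with flannel's default vxlan backend the interface is usually named flannel.1):

ifconfig flannel.1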

Use the kubectl get pods command to check the status of the cluster components:

kubectl get pod --all-namespaces

All pods must be in the Running state; anything else indicates an error.

[root@master k8s]# kubectl get pod --all-namespaces

NAMESPACE    NAME                                    READY    STATUS    RESTARTS  AGE

kube-system  etcd-master                            1/1      Running  0          2h

kube-system  heapster-69b5d4974d-h25kj              1/1      Running  0          1h

kube-system  kube-apiserver-master                  1/1      Running  0          2h

kube-system  kube-controller-manager-master          1/1      Running  0          2h

kube-system  kube-dns-86f4d74b45-8txbg              3/3      Running  0          2h

kube-system  kube-flannel-ds-8g842                  1/1      Running  0          1h

kube-system  kube-flannel-ds-hsz5z                  1/1      Running  0          2h

kube-system  kube-flannel-ds-n688l                  1/1      Running  2          1h

kube-system  kube-proxy-8tcpv                        1/1      Running  0          2h

kube-system  kube-proxy-mw2sl                        1/1      Running  0          1h

kube-system  kube-proxy-vv5c8                        1/1      Running  0          1h

kube-system  kube-scheduler-master                  1/1      Running  0          2h

kube-system  kubernetes-dashboard-7d5dcdb6d9-2b484  1/1      Running  0          32m

kube-system  monitoring-grafana-69df66f668-m7zjh    1/1      Running  0          1h

kube-system  monitoring-influxdb-78d4c6f5b6-dmwhc    1/1      Running  0          1h

To see detailed information (errors are usually caused by images missing on a node):

kubectl describe pod heapster-69b5d4974d-h25kj --namespace=kube-system

Letting the master node run workloads

In a cluster initialized with kubeadm, Pods are not scheduled onto the master node for security reasons. The following command allows the master node to take workloads as well.

kubectl taint nodes master  node-role.kubernetes.io/master-
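If you later want to restore the default behavior and keep ordinary Pods off the master, the taint can be re-added (a sketch using the kubeadm default key and effect):

kubectl taint nodes master node-role.kubernetes.io/master=:NoSchedule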

Run the following command on node1 and node2. The token is the one produced by kubeadm init on the master, and the IP address and port are the master's.

kubeadm join --token 5l8v7s.sbhjkzx5xx372fc2 192.168.1.81:6443 --discovery-token-ca-cert-hash sha256:37c594834cb67e554e170a1f677fcf817ed372dec3c4de2b3171bb6df21f32a9

After joining, check the nodes on the master.
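For example, on the master (node names and ages will differ in your environment); all three nodes should eventually report Ready:

kubectl get nodes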

Deploy the Dashboard add-on

cd ~/k8s

wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

The dashboard image was already pulled earlier in this article, so only the configuration remains:

Set the Service type to NodePort so the dashboard can be reached from outside the cluster at nodeIP:nodePort; a sketch of the edit follows.
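A minimal sketch of the change to the kubernetes-dashboard Service in kubernetes-dashboard.yaml; the port numbers follow the v1.8.3 recommended manifest, and if nodePort is not set explicitly Kubernetes assigns a random port in the 30000-32767 range, which is where a port like 32145 below comes from:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort          # added: expose the dashboard on every node's IP
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard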

Create the dashboard resources

kubectl create -f kubernetes-dashboard.yaml

2) Create a ServiceAccount and a ClusterRoleBinding for RBAC authentication:

$ vim Dashboard-ServiceAccount-ClusterRoleBind.yaml

Add the following content:

apiVersion: v1

kind: ServiceAccount

metadata:

  name: admin-user

  namespace: kube-system

---

apiVersion: rbac.authorization.k8s.io/v1beta1

kind: ClusterRoleBinding

metadata:

  name: admin-user

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: ClusterRole

  name: cluster-admin

subjects:

- kind: ServiceAccount

  name: admin-user

  namespace: kube-system

Create the resource

kubectl create -f Dashboard-ServiceAccount-ClusterRoleBind.yaml

View the token

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Check the access port

kubectl get svc,pod --all-namespaces | grep dashboard

You can now access https://192.168.1.81:32145. Use Firefox and log in with the admin-user token.

Deploy the heapster add-on

Installing Heapster adds usage statistics and monitoring to the cluster, and adds graphs to the Dashboard.

mkdir -p ~/k8s/heapster
cd ~/k8s/heapster
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
kubectl create -f ./
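A quick check that the new pods come up in kube-system (the name suffixes will differ):

kubectl get pods -n kube-system | grep -E 'heapster|monitoring'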

The token can also be viewed this way:

[root@master k8s]# kubectl -n kube-system get secret | grep admin

[root@master k8s]# kubectl describe -n kube-system secret/admin-user-token-m4nvz

Firefox is recommended for accessing the Dashboard.
