Containers and Container Orchestration
Install Docker
# apt-get install apt-transport-https ca-certificates curl software-properties-common
# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# apt-get update && apt-get install -y docker-ce
(Figure: breakdown of a container)
(Figure: breakdown of the container in Kubernetes, partial)
Downloading some of the Kubernetes images below requires extra network configuration to reach blocked sources. Once the official images have been downloaded, push them to a local Docker registry; on later deployments you only need to pull them from there and docker tag them back to the image names the deployment manifests expect.
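A minimal sketch of that workflow, assuming a local registry at registry.local:5000 (a hypothetical address) and using the pause image as an example:

/* one-time: mirror the image into the local registry */
# docker pull k8s.gcr.io/pause:3.1
# docker tag k8s.gcr.io/pause:3.1 registry.local:5000/pause:3.1
# docker push registry.local:5000/pause:3.1
/* on later deployments: pull locally and re-tag to the name the manifests expect */
# docker pull registry.local:5000/pause:3.1
# docker tag registry.local:5000/pause:3.1 k8s.gcr.io/pause:3.1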
Install the Kubernetes components
# systemctl disable firewalld.service
# systemctl stop firewalld.service
# apt-get update && apt-get install -y apt-transport-https
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
# apt-get update
# apt-get install -y kubelet kubeadm kubectl
Master node
kubeadm init --apiserver-advertise-address <host IP address, e.g. 10.109.181.110> --pod-network-cidr=10.244.0.0/16
When it finishes, follow the instructions in its output and apply the following configuration:
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
Enable auto-completion for the Kubernetes command line:
# echo "source <(kubectl completion bash)" >> ~/.bashrc
Configure the flannel network (the --pod-network-cidr value above matches flannel's default subnet):
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Worker (slave) nodes
On the master node, print the join command with 'kubeadm token create --print-join-command', then run it on each worker:
# kubeadm join 10.109.181.110:6443 --token ztwxpd.qbp9iaiqsd8v97gg --discovery-token-ca-cert-hash sha256:79ac20fc3f33ab41e23701923f246f997977a70ff3cb40ab10431aee4bf098b3
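Once the worker has joined, node discovery can be verified from the master with standard kubectl commands:

# kubectl get nodes
# kubectl get pods --all-namespaces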
Node discovery is now complete and the basic service status can be checked as above.
Install the dashboard
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Change the ClusterIP in the service's type: ClusterIP line to NodePort:
# kubectl --namespace=kube-system edit service kubernetes-dashboard
# kubectl --namespace=kube-system get service kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.111.96.162   <none>        443:32588/TCP   2d
In a test environment where you want to skip the kubeconfig- or token-based login, follow the steps below, then open the dashboard and click Skip.
To skip the login as admin, browse to <host IP>:32588 with Firefox. The login page raises a security-policy warning; click Advanced and proceed, then click Skip. Browsing with Chrome produces an error that cannot be bypassed, and there is currently no workaround.
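If the dashboard you reach via Skip has no permissions, one common (and insecure, test-only) workaround is to grant the dashboard's service account cluster-admin. This step is an assumption on our part, not part of the original setup:

# kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard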
Deploy EFK
EFK is really three services, Elasticsearch, Fluentd, and Kibana, used to collect and monitor the logs of container instances and to provide a visual interface for more flexible management. The modules can be combined freely; ELK, for example, replaces Fluentd with Logstash for log collection.
# wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml
Note: comment out the configuration block starting with 'nodeSelector', as shown below.
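In the copy of fluentd-es-ds.yaml current at the time, the block to comment out looked roughly like this (an assumption; check your downloaded file):

      # nodeSelector:
      #   beta.kubernetes.io/fluentd-ds-ready: "true"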
# kubectl create -f .
Check the service status
# kubectl cluster-info
Kubernetes master is running at https://<host IP address>:6443
Elasticsearch is running at https://<host IP address>:6443/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
Kibana is running at https://<host IP address>:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy
KubeDNS is running at https://<host IP address>:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Start a proxy and expose it on port 8888 (any port you like):
# kubectl proxy --address='0.0.0.0' --port=8888 --accept-hosts='^*$' &
Open the Kibana console at http://<host IP address>:8888/api/v1/namespaces/kube-system/services/kibana-logging/proxy/app/kibana#/management/kibana/index?_g=() for further configuration.
This is mainly the index pattern and similar settings (the Fluentd addon typically writes indices named logstash-YYYY.MM.DD, so logstash-* is the usual pattern).
Deploy GlusterFS
On all nodes:
# apt-get install software-properties-common
# add-apt-repository ppa:gluster/glusterfs-3.8
# apt-get update && apt-get install glusterfs-server
# mkdir /opt/glusterd
# mkdir /opt/gfs_data
# sed -i 's/var\/lib/opt/g' /etc/glusterfs/glusterd.vol
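After the sed edit, glusterd's working directory in glusterd.vol should point at the new path; a quick sanity check (the option name comes from the stock glusterd.vol):

# grep working-directory /etc/glusterfs/glusterd.vol
    option working-directory /opt/glusterd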
# systemctl status glusterfs-server.service
● glusterfs-server.service - LSB: GlusterFS server
   Loaded: loaded (/etc/init.d/glusterfs-server; bad; vendor preset: enabled)
   Active: active (running) since Thu 2018-06-07 07:31:51 UTC; 31min ago
     Docs: man:systemd-sysv-generator(8)
   CGroup: /system.slice/glusterfs-server.service
           └─19538 /usr/sbin/glusterd -p /var/run/glusterd.pid

Jun 07 07:31:49 k8s-cluster-1 systemd[1]: Starting LSB: GlusterFS server...
Jun 07 07:31:49 k8s-cluster-1 glusterfs-server[19528]:  * Starting glusterd service glusterd
Jun 07 07:31:51 k8s-cluster-1 glusterfs-server[19528]:    ...done.
Jun 07 07:31:51 k8s-cluster-1 systemd[1]: Started LSB: GlusterFS server.
On the master node:
Make sure every node can resolve the others by hostname:
root@k8s-cluster-1:~/gluster# cat /etc/hosts
…
10.109.181.110 k8s-cluster-1
10.109.181.117 k8s-cluster-2
10.109.181.119 k8s-cluster-3
root@k8s-cluster-1:~/gluster# gluster peer probe k8s-cluster-2
peer probe: success.
root@k8s-cluster-1:~/gluster# gluster peer probe k8s-cluster-3
peer probe: success.
root@k8s-cluster-1:~/gluster# gluster peer status
Number of Peers: 2

Hostname: k8s-cluster-2
Uuid: d10af069-09f6-4d86-8120-dde1afa4393b
State: Peer in Cluster (Connected)

Hostname: k8s-cluster-3
Uuid: c6d4f3eb-78c5-4b10-927e-f1c6e41330d5
State: Peer in Cluster (Connected)
Create the corresponding endpoints
(The glusterfs-endpoints.json configuration was shown as a figure; a sketch is given after the output below.)
root@k8s-cluster-1:~/gluster# kubectl apply -f glusterfs-endpoints.json
endpoints "glusterfs-cluster" created
root@k8s-cluster-1:~/gluster# kubectl get ep
NAME                ENDPOINTS                                                      AGE
glusterfs-cluster   10.109.181.110:1207,10.109.181.117:1207,10.109.181.119:1207   5s
influxdb                                                                           16d
kubernetes          10.109.181.110:6443                                            27d
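A minimal sketch of glusterfs-endpoints.json, consistent with the endpoint list above:

{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": { "name": "glusterfs-cluster" },
  "subsets": [
    { "addresses": [{ "ip": "10.109.181.110" }], "ports": [{ "port": 1207 }] },
    { "addresses": [{ "ip": "10.109.181.117" }], "ports": [{ "port": 1207 }] },
    { "addresses": [{ "ip": "10.109.181.119" }], "ports": [{ "port": 1207 }] }
  ]
}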
Create the corresponding service
(The glusterfs-service.json configuration was shown as a figure; a sketch is given after the output below.)
root@k8s-cluster-1:~/gluster# kubectl apply -f glusterfs-service.json
service "glusterfs-cluster" created
root@k8s-cluster-1:~/gluster# kubectl get svc
NAME                TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
glusterfs-cluster   ClusterIP      10.97.199.53     <none>        1207/TCP         6s
influxdb            LoadBalancer   10.109.218.156                 8086:31240/TCP   16d
kubernetes          ClusterIP      10.96.0.1        <none>        443/TCP          27d
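Likewise, a minimal sketch of glusterfs-service.json consistent with the service listing above (a service with no selector, so it is backed by the manually created endpoints of the same name):

{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": { "name": "glusterfs-cluster" },
  "spec": {
    "ports": [{ "port": 1207 }]
  }
}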
Create the volume and fine-tune its parameters
# gluster volume create k8s-volume transport tcp k8s-cluster-2:/opt/gfs_data k8s-cluster-3:/opt/gfs_data force
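Note that a newly created volume must be started before it can be mounted or have quota enabled; the start command is standard GlusterFS:

# gluster volume start k8s-volume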
# gluster volume quota k8s-volume enable
# gluster volume quota k8s-volume limit-usage / 1TB
# gluster volume set k8s-volume performance.cache-size 4GB
# gluster volume set k8s-volume performance.io-thread-count 16
# gluster volume set k8s-volume network.ping-timeout 10
# gluster volume set k8s-volume performance.write-behind-window-size 1024MB
Basic test
Edit the corresponding entry in glusterfs-pod.json so it names the volume created above: "path": "k8s-volume"
# kubectl apply -f glusterfs-pod.json
Exec into the pod and run df -h to verify that the volume was allocated and mounted.
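If you do not have glusterfs-pod.json at hand, a minimal sketch of such a pod (the image and mount path are illustrative):

{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": { "name": "glusterfs" },
  "spec": {
    "containers": [{
      "name": "glusterfs",
      "image": "nginx",
      "volumeMounts": [{ "mountPath": "/mnt/glusterfs", "name": "glusterfsvol" }]
    }],
    "volumes": [{
      "name": "glusterfsvol",
      "glusterfs": { "endpoints": "glusterfs-cluster", "path": "k8s-volume", "readOnly": false }
    }]
  }
}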
The Heketi service
In short, Heketi adds a RESTful API and a simple CLI on top of GlusterFS for more flexible management of distributed storage.
# wget https://github.com/heketi/heketi/releases/download/v7.0.0/heketi-client-v7.0.0.linux.amd64.tar.gz
# tar -xvf heketi-client-v7.0.0.linux.amd64.tar.gz
# cp heketi-client/bin/heketi-cli /bin/
# git clone https://github.com/gluster/gluster-kubernetes && cd ./gluster-kubernetes/deploy
/*Create a separate namespace*/
# kubectl create namespace gluster
A few prerequisites must be met before running the install script, such as loading the required kernel modules (see the script itself for the full list: https://github.com/gluster/gluster-kubernetes/blob/master/deploy/gk-deploy)
# modprobe dm_snapshot
# modprobe dm_mirror
# modprobe dm_thin_pool
Modify the corresponding DaemonSet file so the kernel modules are mapped into the pod:
root@k-1:/home/ubuntu/gluster-kubernetes/deploy# vim kube-templates/glusterfs-daemonset.yaml
- name: kernel-modules
  hostPath:
    path: "/lib/modules"    # changed from /var/lib/modules
Every node needs the mount.glusterfs command to be available. On some Red Hat systems it is part of the glusterfs-fuse package; on Ubuntu install the client packages:
# add-apt-repository ppa:gluster/glusterfs-3.12
# apt-get update
# apt-get install -y glusterfs-client
Run the install script:
# ./gk-deploy -g -n gluster   /* with the -g option, it will deploy a GlusterFS DaemonSet onto your Kubernetes cluster by treating the nodes listed in the topology file as hyper-converged nodes with both Kubernetes and storage devices on them */
Delete the VGs created by a previous run (e.g. when redeploying):
# vgremove -ff $(sudo vgdisplay | grep -i "VG Name" | awk '{print $3}')
Master node and storage nodes
Here we have three storage nodes: k-3, k-pv1, and k-pv2.
# add-apt-repository ppa:gluster/glusterfs-3.12 && apt-get update && apt-get install -y glusterfs-client
For details, see https://www.jianshu.com/p/2c6a0eacfe4a
root@k-1:/home/ubuntu/gluster-kubernetes/deploy# export HEKETI_CLI_SERVER=$(kubectl get svc/deploy-heketi -n gluster --template 'http://{{.spec.clusterIP}}:{{(index .spec.ports 0).port}}')
root@k-1:/home/ubuntu/gluster-kubernetes/deploy# echo $HEKETI_CLI_SERVER
root@k-1:/home/ubuntu/gluster-kubernetes/deploy# curl $HEKETI_CLI_SERVER/hello
Hello from Heketi
root@k-1:/home/ubuntu/gluster-kubernetes/deploy# heketi-cli -s $HEKETI_CLI_SERVER cluster list
Clusters:
Id:035b137fbe2c02021cc7c381710ed0c4 [block]
root@k-1:/home/ubuntu/gluster-kubernetes/deploy# heketi-cli -s $HEKETI_CLI_SERVER topology info

Cluster Id: a17b06b860a5c731725ae435d03ed750

    File:  true
    Block: true

    Volumes:

    Nodes:

        Node Id: 13206c89322302eee45a7d3d5a0b2175
        State: online
        Cluster Id: a17b06b860a5c731725ae435d03ed750
        Zone: 1
        Management Hostnames: k-3
        Storage Hostnames: 10.109.181.131
        Devices:
                Id:a5987c9a076eac86378825a552ce8b16   Name:/dev/vdb   State:online   Size (GiB):49   Used (GiB):0   Free (GiB):49
                        Bricks:

        Node Id: 952e7876c36b3177a6f30b91f328f752
        State: online
        Cluster Id: a17b06b860a5c731725ae435d03ed750
        Zone: 1
        Management Hostnames: k-pv2
        Storage Hostnames: 10.109.181.134
        Devices:
                Id:56bc8b325b258cade583905f2d6cba0e   Name:/dev/vdb   State:online   Size (GiB):99   Used (GiB):0   Free (GiB):99
                        Bricks:

        Node Id: a28dbd80cd95122a4cd834146b7939ce
        State: online
        Cluster Id: a17b06b860a5c731725ae435d03ed750
        Zone: 1
        Management Hostnames: k-pv1
        Storage Hostnames: 10.109.181.152
        Devices:
                Id:58a6e5a003c6aa1d2ccc4acec67cbd5c   Name:/dev/vdb   State:online   Size (GiB):99   Used (GiB):0   Free (GiB):99
                        Bricks:
Create the corresponding PV, PVC, and a test pod; for the concrete files see https://github.com/gluster/gluster-kubernetes/blob/master/docs/examples/hello_world/README.md
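With Heketi running, volumes can also be provisioned dynamically through a StorageClass instead of hand-written PVs; a minimal sketch (the class name is hypothetical, and resturl must be set to your $HEKETI_CLI_SERVER value):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "<value of $HEKETI_CLI_SERVER>"

A PVC that sets storageClassName: gluster-heketi will then get its volume carved out of the GlusterFS pool automatically.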
HELM
Helm is a package manager for Kubernetes: users describe applications without having to manage the underlying relationships among pods, services, endpoints, and so on. It is a powerful, application-focused tool.
The official description: Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.
The installation steps are as follows:
# curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
# chmod 700 get_helm.sh
# ./get_helm.sh
# helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Install Tiller
# helm init --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.9.1 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
# helm init --upgrade
$HELM_HOME has been configured at /Users/test/.helm.
Tiller (the helm server side component) has been installed into your Kubernetes Cluster.
Happy Helming!
# helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
# kubectl create serviceaccount --namespace kube-system tiller
# kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
# kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
Install the WordPress application as a test:
# helm install --name wordpress-helm --set "persistence.enabled=false,mariadb.persistence.enabled=false" stable/wordpress
NAME: wordpress-helm
LAST DEPLOYED: Thu Jun 28 09:03:36 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
wordpress-helm-mariadb     ClusterIP      10.103.74.128   <none>        3306/TCP                     1s
wordpress-helm-wordpress   LoadBalancer   10.108.70.1     <pending>     80:32211/TCP,443:32191/TCP   1s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
wordpress-helm-wordpress 1 1 1 0 1s
==> v1beta1/StatefulSet
NAME DESIRED CURRENT AGE
wordpress-helm-mariadb 1 1 1s
==> v1/Pod (related)
NAME READY STATUS RESTARTS AGE
wordpress-helm-wordpress-8f698f574-xbbhj 0/1 ContainerCreating 0 0s
wordpress-helm-mariadb-0 0/1 Pending 0 0s
==> v1/Secret
NAME TYPE DATA AGE
wordpress-helm-mariadb Opaque 2 1s
wordpress-helm-wordpress Opaque 2 1s
==> v1/ConfigMap
NAME DATA AGE
wordpress-helm-mariadb 1 1s
wordpress-helm-mariadb-tests 1 1s
NOTES:
1. Get the URL:
NOTE: It may take a few minutes for the LoadBalancer IP to be available. Watch the status with: 'kubectl get svc --namespace default -w wordpress-helm-wordpress'
export SERVICE_IP=$(kubectl get svc --namespace default wordpress-helm-wordpress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
2. Get the credentials to log in to the blog:
echo Username: user
echo Password: $(kubectl get secret --namespace default wordpress-helm-wordpress -o jsonpath="{.data.wordpress-password}" | base64 --decode)
# helm ls
NAME REVISION UPDATED STATUS CHART NAMESPACE
wordpress-helm 1 Thu Jun 28 09:03:36 2018 DEPLOYED wordpress-2.0.0 default
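On a bare-metal cluster like this one there is no cloud load balancer, so the LoadBalancer service's external IP stays pending; the blog is still reachable through the NodePort shown in the service table above, for example:

# curl -I http://<node IP>:32211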
Appendix
K8S API
Execute a command after the pod is instantiated
https://kubernetes.io/cn/docs/tasks/inject-data-application/define-command-argument-container/
Capability
https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
https://github.com/torvalds/linux/blob/master/include/uapi/linux/capability.h
AppArmor
https://kubernetes.io/docs/tutorials/clusters/apparmor/
Networking
https://kubernetes.io/docs/concepts/cluster-administration/networking/
Kompose
https://k8smeetup.github.io/docs/tools/kompose/user-guide/
Cheat sheet