Setting Up a Kubernetes 1.9.2 Cluster on Ubuntu
2018-01-28  俊逸之光
Kubernetes Deployment Notes
Environment
This document records how to build a Kubernetes cluster on in-house physical servers.
Host configuration:
OS: Ubuntu 16.04 LTS x64
Docker: 17.11.0-ce or later
Kubernetes: 1.9.2
IP: each host has a static IP and all hosts can reach one another
Network plan:

| Hostname  | IP          | Role   |
|-----------|-------------|--------|
| crmsvr-77 | 10.26.24.77 | Master |
| crmsvr-76 | 10.26.24.76 | Node   |
| crmsvr-75 | 10.26.24.75 | Node   |
| crmsvr-74 | 10.26.24.74 | Node   |
Required images:

| Image                                                   | Tag    |
|---------------------------------------------------------|--------|
| gcr.io/google_containers/pause-amd64                    | 3.0    |
| gcr.io/google_containers/etcd-amd64                     | 3.1.11 |
| gcr.io/google_containers/kube-apiserver-amd64           | v1.9.2 |
| gcr.io/google_containers/kube-controller-manager-amd64  | v1.9.2 |
| gcr.io/google_containers/kube-scheduler-amd64           | v1.9.2 |
| gcr.io/google_containers/kube-proxy-amd64               | v1.9.2 |
| k8s.gcr.io/kubernetes-dashboard-amd64                   | v1.8.2 |
| gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64    | 1.14.8 |
| gcr.io/google_containers/k8s-dns-kube-dns-amd64         | 1.14.8 |
| gcr.io/google_containers/k8s-dns-sidecar-amd64          | 1.14.8 |
Preparation
Images:
Because of network restrictions in mainland China, the required Kubernetes images cannot be pulled from gcr.io directly. They can be rebuilt on Docker Cloud from a GitHub repository and then pulled from Docker Hub.
Build procedure reference: http://blog.csdn.net/zgkpy/article/details/79181326
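As an illustration of the rebuild trick (the repository name is hypothetical), each mirrored image can be produced from a one-line Dockerfile that Docker Cloud builds from a GitHub repository and publishes under your own Docker Hub account:

# Dockerfile in a hypothetical GitHub repo, built by Docker Cloud as yourname/kube-apiserver-amd64
# Docker Cloud's build servers can reach gcr.io, so the image is simply re-published unchanged.
FROM gcr.io/google_containers/kube-apiserver-amd64:v1.9.2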
Software:
1. Kubernetes
Download page:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md
Version:
v1.9.2
Binaries:
1) Client binaries:
kubernetes-client-darwin-amd64.tar.gz (macOS)
kubernetes-client-linux-amd64.tar.gz (Linux)
kubernetes-client-windows-amd64.tar.gz (Windows)
2) Server binaries:
kubernetes-server-linux-amd64.tar.gz
3) Node binaries:
kubernetes-node-linux-amd64.tar.gz
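A minimal sketch of fetching the server tarball and staging the binaries this guide uses (the URL follows the standard release download pattern; adjust it if you use a mirror):

#!/bin/bash
# Download the v1.9.2 server tarball and install the kubelet/kubectl binaries.
wget https://dl.k8s.io/v1.9.2/kubernetes-server-linux-amd64.tar.gz
tar -xzf kubernetes-server-linux-amd64.tar.gz
sudo cp kubernetes/server/bin/kubelet kubernetes/server/bin/kubectl /usr/bin/
sudo chmod a+x /usr/bin/kubelet /usr/bin/kubectl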
2. Installing Docker
1) Latest version:
wget -qO- https://get.docker.com/ | sh
2) A specific version:
Online install:
https://docs.docker-cn.com/engine/installation/linux/docker-ce/ubuntu/#install-from-a-package
Offline install:
Package repository: https://download.docker.com/linux/
Choose:
the matching .deb under ubuntu/dists/xenial/pool/stable/amd64
(for other Linux distributions, download the appropriate package)
Install:
sudo dpkg -i <Docker package file>
$ sudo dpkg -i docker-ce_17.12.0~ce-0~ubuntu_amd64.deb
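After installing, it is worth confirming that the daemon is running and the pinned version is in place:

$ sudo docker version --format '{{.Server.Version}}'
$ sudo systemctl status docker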
3. Reconfigure the Docker bridge (if necessary)
1) Install brctl
$ sudo apt install bridge-utils
2) Stop the Docker service
$ sudo service docker stop
3) Run the following in order
$ sudo brctl addbr bridge0
$ sudo ip addr add 10.26.34.0/24 dev bridge0
$ sudo ip link set dev bridge0 up
# 10.26.34.0/24 is the custom container subnet
# Verify the bridge
$ sudo ip addr show bridge0
4) Create the file
$ sudo touch /etc/docker/daemon.json
with the following content:
{
"bridge": "bridge0"
}
5) Restart the Docker service
$ sudo service docker start
6) Remove the default bridge
$ sudo ip link set dev docker0 down
$ sudo brctl delbr docker0
$ sudo iptables -t nat -F POSTROUTING
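A bridge created this way does not survive a reboot. One way to make bridge0 persistent on Ubuntu 16.04 (an assumption about your network setup; adjust the address to your custom subnet) is to declare it in /etc/network/interfaces:

# /etc/network/interfaces (excerpt), hypothetical persistent bridge0
auto bridge0
iface bridge0 inet static
    address 10.26.34.1
    netmask 255.255.255.0
    bridge_ports none
    bridge_stp off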
7) Pull the images and retag them
#!/bin/bash
# Pull the mirrored images and retag them to the names Kubernetes expects
#pause
sudo docker pull zhguokai/pause-amd64:3.0
sudo docker tag zhguokai/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
#etcd
sudo docker pull zhguokai/etcd-amd64:3.1.11
sudo docker tag zhguokai/etcd-amd64:3.1.11 gcr.io/google_containers/etcd-amd64:3.1.11
#apiserver
sudo docker pull zhguokai/kube-apiserver-amd64:v1.9.2
sudo docker tag zhguokai/kube-apiserver-amd64:v1.9.2 gcr.io/google_containers/kube-apiserver-amd64:v1.9.2
#controller
sudo docker pull zhguokai/kube-controller-manager-amd64:v1.9.2
sudo docker tag zhguokai/kube-controller-manager-amd64:v1.9.2 gcr.io/google_containers/kube-controller-manager-amd64:v1.9.2
#scheduler
sudo docker pull zhguokai/kube-scheduler-amd64:v1.9.2
sudo docker tag zhguokai/kube-scheduler-amd64:v1.9.2 gcr.io/google_containers/kube-scheduler-amd64:v1.9.2
#proxy
sudo docker pull zhguokai/kube-proxy-amd64:v1.9.2
sudo docker tag zhguokai/kube-proxy-amd64:v1.9.2 gcr.io/google_containers/kube-proxy-amd64:v1.9.2
# app
#dashboard
sudo docker pull zhguokai/kubernetes-dashboard-amd64:v1.8.2
sudo docker tag zhguokai/kubernetes-dashboard-amd64:v1.8.2 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.2
sudo docker pull zhguokai/k8s-dns-dnsmasq-nanny-amd64:1.14.8
sudo docker tag zhguokai/k8s-dns-dnsmasq-nanny-amd64:1.14.8 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.8
sudo docker pull zhguokai/k8s-dns-kube-dns-amd64:1.14.8
sudo docker tag zhguokai/k8s-dns-kube-dns-amd64:1.14.8 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.8
sudo docker pull zhguokai/k8s-dns-sidecar-amd64:1.14.8
sudo docker tag zhguokai/k8s-dns-sidecar-amd64:1.14.8 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.8
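A quick check that every retagged image is present before starting the cluster (the grep pattern simply matches the names created above):

#!/bin/bash
# List the gcr.io / k8s.gcr.io tags produced by the script above.
sudo docker images | grep -E 'gcr.io/google_containers|k8s.gcr.io' || echo "no retagged images found"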
Deployment
This deployment uses a single Master plus N Nodes; the etcd component itself runs as a cluster.
The Master node also acts as a Node and takes part in scheduling.
Apart from the kubelet, which runs directly on the host, every component runs in a container.

Write the YAML manifests that the kubelet will run as static pods.

etcd.yaml:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --config-file=/etc/etcd/etcd-config.yaml
    image: gcr.io/google_containers/etcd-amd64:3.1.11
    ports:
    - containerPort: 12379
      hostPort: 12379
      name: etcd-port
    - containerPort: 12380
      hostPort: 12380
      name: etcd-cl-port
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 10.26.24.77
        path: /health
        port: 12379
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/etcd/etcd-config.yaml
      name: etcd-conf
  hostNetwork: true
  volumes:
  - hostPath:
      path: /k8s/app/etcd/data
      type: DirectoryOrCreate
    name: etcd-data
  - hostPath:
      path: /k8s/conf/etcd-config.yaml
      type: FileOrCreate
    name: etcd-conf
status: {}
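Once the kubelet has started this pod, the etcd member can be checked against the host port defined above (a quick sanity check using the Master's client URL):

$ curl http://10.26.24.77:12379/health
# a healthy member responds with {"health": "true"}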
kube-apiserver.yaml:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --allow-privileged=true
    - --etcd-servers=http://10.26.24.77:12379
    - --secure-port=0
    - --kubelet-https=false
    - --insecure-bind-address=0.0.0.0
    - --enable-swagger-ui=true
    - --insecure-port=18080
    - --port=18080
    image: gcr.io/google_containers/kube-apiserver-amd64:v1.9.2
    ports:
    - containerPort: 18080
      hostPort: 18080
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 10.26.24.77
        path: /healthz
        port: 18080
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
  hostNetwork: true
status: {}
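With the secure port disabled and the insecure port set to 18080, the API server can be probed directly over plain HTTP:

$ curl http://10.26.24.77:18080/healthz
# returns "ok" once the apiserver can reach the etcd cluster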
kube-controller-manager.yaml:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --leader-elect=true
    - --controllers=*,bootstrapsigner,tokencleaner
    - --master=http://10.26.24.77:18080
    image: gcr.io/google_containers/kube-controller-manager-amd64:v1.9.2
    ports:
    - containerPort: 10252
      hostPort: 10252
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 10.26.24.77
        path: /healthz
        port: 10252
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    volumeMounts:
    - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      name: flexvolume-dir
  hostNetwork: true
  volumes:
  - hostPath:
      path: /k8s/app/kube-controller/kubelet-plugins/volume/exec
      type: DirectoryOrCreate
    name: flexvolume-dir
status: {}
kube-scheduler.yaml:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --address=0.0.0.0
    - --leader-elect=true
    - --master=http://10.26.24.77:18080
    image: gcr.io/google_containers/kube-scheduler-amd64:v1.9.2
    ports:
    - containerPort: 10251
      hostPort: 10251
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 10.26.24.77
        path: /healthz
        port: 10251
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
status: {}
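Both control-loop components expose an unsecured /healthz on the host ports configured above, so a quick post-start check might be:

$ curl http://10.26.24.77:10252/healthz   # kube-controller-manager
$ curl http://10.26.24.77:10251/healthz   # kube-scheduler
# each should return "ok"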
Write the configuration files.

etcd-config.yaml:
# This is the configuration file for the etcd server.

# Human-readable name for this member.
name: 'rh-etcd-node1'

# Path to the data directory.
data-dir: /var/lib/etcd

# Number of committed transactions to trigger a snapshot to disk.
snapshot-count: 10000

# Time (in milliseconds) of a heartbeat interval.
heartbeat-interval: 100

# Time (in milliseconds) for an election to timeout.
election-timeout: 1000

# Raise alarms when backend size exceeds the given quota. 0 means use the default quota.
quota-backend-bytes: 0

# List of comma separated URLs to listen on for peer traffic.
listen-peer-urls: http://10.26.24.76:12380

# List of comma separated URLs to listen on for client traffic.
listen-client-urls: http://10.26.24.76:12379

# Maximum number of snapshot files to retain (0 is unlimited).
max-snapshots: 5

# Maximum number of wal files to retain (0 is unlimited).
max-wals: 5

# List of this member's peer URLs to advertise to the rest of the cluster.
initial-advertise-peer-urls: http://10.26.24.76:12380

# List of this member's client URLs to advertise to the public.
advertise-client-urls: http://10.26.24.76:12379

# Initial cluster configuration for bootstrapping.
initial-cluster: rh-etcd-m=http://10.26.24.77:12380,rh-etcd-node1=http://10.26.24.76:12380,rh-etcd-node2=http://10.26.24.75:12380

# Initial cluster token for the etcd cluster during bootstrap.
initial-cluster-token: 'etcd-cluster'

# Initial cluster state ('new' or 'existing').
initial-cluster-state: 'new'

# Reject reconfiguration requests that would cause quorum loss.
strict-reconfig-check: false

# Accept etcd V2 client requests.
enable-v2: true

# Enable runtime profiling data via HTTP server.
enable-pprof: true

# Force to create a new one member cluster.
force-new-cluster: false

# The remaining options from the upstream sample config (wal-dir, cors, discovery,
# proxy, client/peer TLS security, debug and logging settings) are left commented
# out, i.e. at their defaults.
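The file above is the copy for crmsvr-76 (member rh-etcd-node1). Each host needs its own copy in which name and the four URL fields point at that host; on the Master (10.26.24.77), for example, the member-specific lines would presumably read:

name: 'rh-etcd-m'
listen-peer-urls: http://10.26.24.77:12380
listen-client-urls: http://10.26.24.77:12379
initial-advertise-peer-urls: http://10.26.24.77:12380
advertise-client-urls: http://10.26.24.77:12379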
kubelet-config.yaml:
apiVersion: v1
kind: Config
users:
- name: kubelet
clusters:
- name: kubernetes
  cluster:
    server: http://10.26.24.77:18080
contexts:
- context:
    cluster: kubernetes
    user: kubelet
  name: service-account-context
current-context: service-account-context
kubelet.service:
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/bin/kubelet \
  --kubeconfig=/k8s/conf/kubelet-config.yaml \
  --fail-swap-on=false \
  --pod-manifest-path=/k8s/manifest \
  --allow-privileged=true \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --v=4
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
Plan the nodes
1. Log in to each server and create the following directories:
   /k8s/conf  /k8s/app  /k8s/logs  /k8s/manifest
2. Use crmsvr-77 as the Master node.
   Upload to /k8s/manifest: etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml
   Upload to /k8s/conf: etcd-config.yaml, kubelet-config.yaml
   Upload kubelet.service to /lib/systemd/system
   Upload the kubelet binary to /usr/bin/ and run: sudo chmod a+x /usr/bin/kubelet
3. The remaining servers are Node nodes.
   Upload to /k8s/manifest: etcd.yaml
   Upload to /k8s/conf: etcd-config.yaml, kubelet-config.yaml
   Upload kubelet.service to /lib/systemd/system
   Upload the kubelet binary to /usr/bin/ and run: sudo chmod a+x /usr/bin/kubelet
4. In each configuration file, change the IP addresses and ports to the actual values for that host.
5. On every node run:
   $ sudo systemctl daemon-reload
   $ sudo systemctl enable kubelet
   $ sudo systemctl start kubelet
6. Wait for all nodes to start.
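If a node does not come up, the kubelet log is the first place to look; these commands are usually enough to spot manifest or image problems:

$ sudo systemctl status kubelet
$ sudo journalctl -u kubelet -f            # follow the kubelet log
$ sudo docker ps --format '{{.Names}}'     # static-pod containers should appear here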
Verify the nodes
Using a browser or curl:

etcd:
  URL: http://10.26.24.77:12379/v2/keys
  Returns:
  {"action":"get","node":{"dir":true,"nodes":[{"key":"/mykey","value":"this is test","modifiedIndex":11,"createdIndex":11}]}}

apiserver:
  URL: http://10.26.24.77:18080/
  Returns:
  {
    "paths": [
      "/api",
      "/api/v1",
      "/apis",
      "/apis/",
      ...
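The /mykey entry in the sample response was written beforehand through etcd's v2 API (enable-v2: true in etcd-config.yaml); the write looks roughly like this:

$ curl -X PUT http://10.26.24.77:12379/v2/keys/mykey -d value="this is test"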
Connect a client
After the kube-apiserver check passes, clients can talk to the cluster with:
$ kubectl -s http://API-SERVER:PORT <command> [options]
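For example, to list the registered nodes and the system pods against this cluster's insecure endpoint:

$ kubectl -s http://10.26.24.77:18080 get nodes
$ kubectl -s http://10.26.24.77:18080 get pods --all-namespaces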
Deploy the Dashboard
Write the YAML file kube-dashboard.yaml:

# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.2
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
        - --auto-generate-certificates
        # Uncomment the following line to manually specify Kubernetes API server Host
        # If not specified, Dashboard will attempt to auto discover the API server and connect
        # to it. Uncomment only if the default does not work.
        - --apiserver-host=http://10.26.24.77:18080
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

Note: explicitly set --apiserver-host to the address of your API server.

Deploy it:
kubectl -s http://10.26.24.77:18080 create -f kube-dashboard.yaml

Check pod status:
kubectl -s http://10.26.24.77:18080 get pods --all-namespaces
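Because the Service is of type NodePort, the Dashboard is reachable over HTTPS on a high port of any node; one way to find it (a sketch):

$ kubectl -s http://10.26.24.77:18080 -n kube-system get svc kubernetes-dashboard
# note the port mapped to 443, then browse to https://<any-node-ip>:<node-port>/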
Deploy the kube-proxy DaemonSet
Note: kube-proxy is a DaemonSet, meaning one kube-proxy pod is created on every selected node.

Write the YAML file kube-proxy.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-proxy
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: system:kube-proxy
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: kube-proxy
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: system:node-proxier
  apiGroup: rbac.authorization.k8s.io
# Please keep kube-proxy configuration in-sync with:
# cluster/saltbase/salt/kube-proxy/kube-proxy.manifest
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    k8s-app: kube-proxy
    addonmanager.kubernetes.io/mode: Reconcile
  name: kube-proxy
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-proxy
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 10%
  template:
    metadata:
      labels:
        k8s-app: kube-proxy
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      # {{pod_priority}}
      hostNetwork: true
      nodeSelector:
        # beta.kubernetes.io/kube-proxy-ds-ready: "true"
      tolerations:
      - operator: "Exists"
        effect: "NoExecute"
      - operator: "Exists"
        effect: "NoSchedule"
      containers:
      - name: kube-proxy
        image: gcr.io/google_containers/kube-proxy-amd64:v1.9.2
        resources:
          requests:
            cpu: 150m
        command:
        - /bin/sh
        - -c
        - kube-proxy --cluster-cidr=10.0.0.0/24 --master=http://10.26.24.77:18080 --oom-score-adj=-998 1>>/var/log/kube-proxy.log 2>&1
        env:
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /var/log
          name: varlog
          readOnly: false
        - mountPath: /run/xtables.lock
          name: xtables-lock
          readOnly: false
        - mountPath: /lib/modules
          name: lib-modules
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
      - name: lib-modules
        hostPath:
          path: /lib/modules
      serviceAccountName: kube-proxy

Note: set the --master parameter to your API server address.

Deploy it:
kubectl -s http://10.26.24.77:18080 create -f kube-proxy.yaml

Check pod status:
kubectl -s http://10.26.24.77:18080 get pods --all-namespaces
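To confirm that one kube-proxy pod is running per node and that it is programming iptables, something like:

$ kubectl -s http://10.26.24.77:18080 -n kube-system get ds kube-proxy
$ sudo iptables-save | grep KUBE-SERVICES | head    # on any node; rules appear once kube-proxy syncs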
Deploy the kube-dns add-on
Write the YAML file kube-dns.yaml:

# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Should keep target in cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml
# in sync with this file.

# Warning: This is a file generated from the base underscore template file: kube-dns.yaml.base

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.254
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.8
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        - --kube-master-url=http://10.26.24.77:18080
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.8
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --log-facility=-
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.8
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns

Note: change --kube-master-url to the actual API server address.

Deploy it:
kubectl -s http://10.26.24.77:18080 create -f kube-dns.yaml

Check pod status:
kubectl -s http://10.26.24.77:18080 get pods --all-namespaces
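A sketch of verifying cluster DNS once kube-dns is Running. Note that pods only use kube-dns automatically if the kubelet is started with --cluster-dns=10.0.0.254 and --cluster-domain=cluster.local, which the kubelet.service above does not set; the busybox image must also be pullable from your nodes:

$ kubectl -s http://10.26.24.77:18080 run busybox --image=busybox --restart=Never --command -- sleep 3600
$ kubectl -s http://10.26.24.77:18080 exec busybox -- nslookup kubernetes.default
# a successful lookup resolves kubernetes.default via the kube-dns Service at 10.0.0.254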
Common operations

Kubernetes:
1. List all pods in all namespaces:
   kubectl get pods --all-namespaces
2. Show details of a single pod:
   kubectl describe --namespace=kube-system po <pod-name>
3. Deploy an application:
   kubectl create -f ***.yaml
4. Delete an application:
   kubectl delete -f ***.yaml
5. Open the Dashboard:
   kubectl proxy
   then browse to http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
6. See the flags of each component:
   upload the corresponding binary to an Ubuntu server and run ./<binary> --help, e.g. ./kube-proxy --help
Docker:
1. Remove images
   # Remove images whose name matches a keyword (here "k8s")
   docker rmi --force `docker images | grep k8s | awk '{print $3}'`
   # Remove untagged images
   docker rmi `docker images | grep "<none>" | awk '{print $3}'`
2. Remove containers
   # Remove all containers
   docker rm `docker ps -a -q`
   # Remove containers matching a keyword
   docker rm $(docker ps -a | grep <keyword> | awk '{print $1}')
References

- Kubernetes official site
- Kubernetes community
- Docker Cloud: https://cloud.docker.com/swarm
- GitHub
- kube-dashboard
- Image resources
- Related documents
- Kubernetes add-on YAML files: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons
- Images and related material