
k8s 1.6.1 Installation (with in-China mirror acceleration)

2017-04-14  老吕子

References

https://mritd.me/2016/10/29/set-up-kubernetes-cluster-by-kubeadm/
http://www.jianshu.com/p/4f5066dad9b4
http://kubernetes.io/docs/admin/addons/
https://github.com/kubernetes/kubernetes/issues/43815
https://github.com/kubernetes/kubernetes/pull/43835
https://www.addops.cn/post/kubernetes-deployment.html

Environment

centos 7.2
docker-engine-1.12.6 (with the Aliyun registry mirror)
k8s 1.6.1
hostnames resolvable on every machine

Build the RPMs

yum install git -y
git clone https://github.com/kubernetes/release && cd release/rpm && ./docker-build.sh
[root@cloud4ourself-mykc1 release]# git rev-parse --short HEAD
ee84be6

Installation (master and nodes)

echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
sysctl -p
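The `>>` append above runs unguarded, so re-running the setup duplicates the line in /etc/sysctl.conf. A minimal idempotent sketch (the `ensure_sysctl` helper and its file argument are assumptions introduced here, not part of the original steps):

```shell
#!/bin/sh
# ensure_sysctl: append the bridge setting only if the exact line is not
# already present. The config file path is taken as $1 so the helper can
# be tried against a scratch file before touching /etc/sysctl.conf.
ensure_sysctl() {
  conf="$1"
  key="net.bridge.bridge-nf-call-iptables = 1"
  grep -qxF "$key" "$conf" 2>/dev/null || echo "$key" >> "$conf"
}

# On a real host (as root):
#   ensure_sysctl /etc/sysctl.conf && sysctl -p
```

Running the helper twice leaves a single copy of the line, so the install section can be re-run safely.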

yum install output/x86_64/kube*.rpm -y

Note: kubeadm 1.6.0 has a bug here (https://github.com/kubernetes/kubernetes/issues/43815), which is fixed in 1.6.1.

Pull the Docker images

docker pull 4admin2root/kube-controller-manager-amd64:v1.6.0
docker pull 4admin2root/kube-scheduler-amd64:v1.6.0
docker pull 4admin2root/kube-apiserver-amd64:v1.6.0
docker pull 4admin2root/etcd-amd64:3.0.17
docker pull 4admin2root/kube-proxy-amd64:v1.6.0
docker pull 4admin2root/k8s-dns-sidecar-amd64:1.14.1
docker pull 4admin2root/k8s-dns-dnsmasq-nanny-amd64:1.14.1
docker pull 4admin2root/pause-amd64:3.0
docker pull 4admin2root/etcd:2.2.1

docker pull 4admin2root/node:v1.1.0
docker pull 4admin2root/cni:v1.6.1
docker pull 4admin2root/kube-policy-controller:v0.5.4


docker tag 4admin2root/kube-controller-manager-amd64:v1.6.0    gcr.io/google_containers/kube-controller-manager-amd64:v1.6.0
docker tag 4admin2root/kube-scheduler-amd64:v1.6.0             gcr.io/google_containers/kube-scheduler-amd64:v1.6.0
docker tag 4admin2root/kube-apiserver-amd64:v1.6.0             gcr.io/google_containers/kube-apiserver-amd64:v1.6.0
docker tag 4admin2root/etcd-amd64:3.0.17                       gcr.io/google_containers/etcd-amd64:3.0.17
docker tag 4admin2root/kube-proxy-amd64:v1.6.0                 gcr.io/google_containers/kube-proxy-amd64:v1.6.0
docker tag  4admin2root/k8s-dns-sidecar-amd64:1.14.1           gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1 
docker tag  4admin2root/k8s-dns-dnsmasq-nanny-amd64:1.14.1     gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
docker tag  4admin2root/pause-amd64:3.0                        gcr.io/google_containers/pause-amd64:3.0
docker tag  4admin2root/etcd:2.2.1                             gcr.io/google_containers/etcd:2.2.1

docker tag  4admin2root/node:v1.1.0   quay.io/calico/node:v1.1.0
docker tag  4admin2root/cni:v1.6.1    quay.io/calico/cni:v1.6.1
docker tag  4admin2root/kube-policy-controller:v0.5.4  quay.io/calico/kube-policy-controller:v0.5.4
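The pull/tag pairs above can be collapsed into one loop, since every gcr.io image follows the same renaming pattern. A sketch under that assumption (the `gcr_name` helper is introduced here; the three calico images would need the same loop with quay.io/calico as the target prefix):

```shell
#!/bin/sh
# gcr_name: map a 4admin2root mirror name to its upstream
# gcr.io/google_containers name by swapping the repository prefix.
gcr_name() {
  echo "$1" | sed 's|^4admin2root/|gcr.io/google_containers/|'
}

IMAGES="kube-controller-manager-amd64:v1.6.0 kube-scheduler-amd64:v1.6.0 \
kube-apiserver-amd64:v1.6.0 etcd-amd64:3.0.17 kube-proxy-amd64:v1.6.0 \
k8s-dns-sidecar-amd64:1.14.1 k8s-dns-dnsmasq-nanny-amd64:1.14.1 \
pause-amd64:3.0 etcd:2.2.1"

# Pull each mirror image and retag it to the name kubeadm expects.
if command -v docker >/dev/null 2>&1; then
  for img in $IMAGES; do
    docker pull "4admin2root/$img"
    docker tag "4admin2root/$img" "$(gcr_name "4admin2root/$img")"
  done
fi
```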

Run init on the master (using the kubeadm built above)

[root@cloud4ourself-mykc2 ~]# kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.0
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] WARNING: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Starting the kubelet service
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [cloud4ourself-mykc2.novalocal kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.9.5.107]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 23.784875 seconds
[apiclient] Waiting for at least one node to register
[apiclient] First node has registered after 4.502966 seconds
[token] Using token: 8d92f5.922276a553ed2847
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  sudo cp /etc/kubernetes/admin.conf $HOME/
  sudo chown $(id -u):$(id -g) $HOME/admin.conf
  export KUBECONFIG=$HOME/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 8d92f5.922276a553ed2847 10.9.5.107:6443
[root@cloud4ourself-mykc2 ~]# kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
NAME                            STATUS     AGE       VERSION
cloud4ourself-mykc2.novalocal   NotReady   9m        v1.6.1
[root@cloud4ourself-mykc2 ~]# kubectl --kubeconfig /etc/kubernetes/admin.conf get pod --all-namespaces
NAMESPACE     NAME                                                    READY     STATUS    RESTARTS   AGE
kube-system   etcd-cloud4ourself-mykc2.novalocal                      1/1       Running   0          9m
kube-system   kube-apiserver-cloud4ourself-mykc2.novalocal            1/1       Running   0          9m
kube-system   kube-controller-manager-cloud4ourself-mykc2.novalocal   1/1       Running   0          9m
kube-system   kube-dns-3913472980-jgk3f                               0/3       Pending   0          10m
kube-system   kube-proxy-9ghw7                                        1/1       Running   0          10m
kube-system   kube-scheduler-cloud4ourself-mykc2.novalocal            1/1       Running   0          9m

#docker pull quay.io/calico/cni:v1.6.1
#docker pull quay.io/calico/node:v1.1.0
#docker pull quay.io/calico/kube-policy-controller:v0.5.4

[root@cloud4ourself-mykc2 ~]# kubectl apply -f http://docs.projectcalico.org/v2.1/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml --kubeconfig /etc/kubernetes/admin.conf
configmap "calico-config" created
daemonset "calico-etcd" created
service "calico-etcd" created
daemonset "calico-node" created
deployment "calico-policy-controller" created
clusterrolebinding "calico-cni-plugin" created
clusterrole "calico-cni-plugin" created
serviceaccount "calico-cni-plugin" created
clusterrolebinding "calico-policy-controller" created
clusterrole "calico-policy-controller" created
serviceaccount "calico-policy-controller" created
[root@cloud4ourself-mykc2 ~]# kubectl --kubeconfig /etc/kubernetes/admin.conf get pod --all-namespaces
NAMESPACE     NAME                                                    READY     STATUS              RESTARTS   AGE
kube-system   calico-etcd-q6p11                                       1/1       Running             0          8s
kube-system   calico-node-j3b47                                       0/2       ContainerCreating   0          7s
kube-system   calico-policy-controller-2561685917-8sj5l               0/1       Pending             0          7s
kube-system   etcd-cloud4ourself-mykc2.novalocal                      1/1       Running             0          12m
kube-system   kube-apiserver-cloud4ourself-mykc2.novalocal            1/1       Running             0          12m
kube-system   kube-controller-manager-cloud4ourself-mykc2.novalocal   1/1       Running             0          12m
kube-system   kube-dns-3913472980-jgk3f                               0/3       Pending             0          13m
kube-system   kube-proxy-9ghw7                                        1/1       Running             0          13m
kube-system   kube-scheduler-cloud4ourself-mykc2.novalocal            1/1       Running             0          12m

[root@cloud4ourself-mykc2 ~]# kubectl --kubeconfig /etc/kubernetes/admin.conf get pod --all-namespaces
NAMESPACE     NAME                                                    READY     STATUS    RESTARTS   AGE
kube-system   calico-etcd-q6p11                                       1/1       Running   0          9m
kube-system   calico-node-j3b47                                       2/2       Running   0          9m
kube-system   calico-policy-controller-2561685917-8sj5l               1/1       Running   0          9m
kube-system   etcd-cloud4ourself-mykc2.novalocal                      1/1       Running   0          21m
kube-system   kube-apiserver-cloud4ourself-mykc2.novalocal            1/1       Running   0          21m
kube-system   kube-controller-manager-cloud4ourself-mykc2.novalocal   1/1       Running   0          21m
kube-system   kube-dns-3913472980-jgk3f                               3/3       Running   0          22m
kube-system   kube-proxy-9ghw7                                        1/1       Running   0          22m
kube-system   kube-scheduler-cloud4ourself-mykc2.novalocal            1/1       Running   0          21m
[root@cloud4ourself-mykc2 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@cloud4ourself-mykc2 ~]# source ~/.bash_profile

Other nodes

#docker pull quay.io/calico/cni:v1.6.1
#docker pull quay.io/calico/node:v1.1.0
#docker pull quay.io/calico/kube-policy-controller:v0.5.4

[root@cloud4ourself-mykc3 ~]#  kubeadm join --token 8d92f5.922276a553ed2847 10.9.5.107:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "10.9.5.107:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.9.5.107:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://10.9.5.107:6443"
[discovery] Successfully established connection with API Server "10.9.5.107:6443"
[bootstrap] Detected server version: v1.6.0
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
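Rather than re-running `kubectl get nodes` by hand on the master, the join can be confirmed by polling until every node reports Ready. A sketch (the `not_ready` helper is hypothetical, not from the article; note the node stays NotReady until the pod network from the calico step is up):

```shell
#!/bin/sh
# not_ready: count lines on stdin whose second column (STATUS) is not
# "Ready", matching the layout of `kubectl get nodes --no-headers`.
not_ready() {
  awk '$2 != "Ready" {n++} END {print n+0}'
}

# Poll on the master until no node is left in a non-Ready state.
if command -v kubectl >/dev/null 2>&1; then
  while [ "$(kubectl get nodes --no-headers | not_ready)" -gt 0 ]; do
    sleep 5
  done
  echo "all nodes Ready"
fi
```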
