Installing Kubernetes on CentOS

2020-04-19  菜菜少吃菜

A note up front: this post records the process; it does not go deep into the details.


This install uses two CentOS virtual machines created on Windows 10, and as usual Kubernetes is installed with kubeadm, the official bootstrap tool.

A quick word on kubeadm:

Installing kubeadm actually pulls in four packages: kubelet (the agent that runs on every node), kubeadm (the cluster bootstrap tool), kubectl (the command-line client), and kubernetes-cni (the CNI network plugins).

Environment

node1 (master):

node2:

kubernetes:

yum repository

Note: if your machine is ARM, change the repository accordingly, e.g. https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-arm64/

# root @ centos1 in ~ [3:49:02]
$ cd /etc/yum.repos.d/

# root @ centos1 in /etc/yum.repos.d [3:49:35]
$ cat kubernetes.repo
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
enabled=1

# refresh the yum cache after adding the repo
$ yum clean all
$ yum makecache

Package installation

# install docker (this assumes the docker-ce yum repo is already configured)
$ yum install -y docker-ce
# install kubeadm; change the version number to install a different release
$ yum install -y kubeadm-1.14.1

Prerequisites

Before installing, kubeadm runs preflight checks on things like swap and the required images.

Disable the firewall

# stop and disable firewalld
systemctl stop firewalld && systemctl disable firewalld

Disable SELinux

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config && setenforce 0

iptables bridge settings

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# apply the new settings (the br_netfilter kernel module must be loaded for these keys to exist)
sysctl --system

Disable swap

Why disable swap? kubelet's preflight check fails by default when swap is enabled, because Kubernetes relies on predictable memory accounting for pods.

#temporarily disable
swapoff -a
#permanently disable: comment out the swap entry in /etc/fstab
$ cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Mon Mar 23 07:57:47 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos_centos1-root /                       xfs     defaults        0 0
UUID=03454a1c-18b4-422b-bb15-ac808270390a /boot                   xfs     defaults        0 0
/dev/mapper/centos_centos1-home /home                   xfs     defaults        0 0
#/dev/mapper/centos_centos1-swap swap                    swap    defaults        0 0

$ free -m # the Swap row is all zeros
              total        used        free      shared  buff/cache   available
Mem:           7991        1312        5070          41        1609        6560
Swap:             0           0           0
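The fstab edit above was done by hand, but it can also be scripted. A minimal sketch that comments out active swap entries, demonstrated here on a throwaway copy (the sed pattern is an assumption about typical fstab layouts; apply it to the real /etc/fstab only after checking the result):

```shell
# Build a throwaway fstab-style file to demonstrate on.
tmp=$(mktemp)
printf '%s\n' \
  '/dev/mapper/centos_centos1-root /     xfs  defaults 0 0' \
  '/dev/mapper/centos_centos1-swap swap  swap defaults 0 0' > "$tmp"

# Prefix every active (uncommented) swap mount line with '#'.
sed -ri 's|^([^#].*[[:space:]]swap[[:space:]].*)$|#\1|' "$tmp"

cat "$tmp"   # the swap line is now commented out
```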

Installing k8s

Start kubelet

# kubelet will restart in a loop until kubeadm init gives it a configuration; that is expected at this point
systemctl start kubelet
systemctl enable kubelet

Initialization

Note: if anything goes wrong, you can wipe the attempt with kubeadm reset and start over.

# these are the images the master needs before k8s can install cleanly
$ kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.14.1
k8s.gcr.io/kube-controller-manager:v1.14.1
k8s.gcr.io/kube-scheduler:v1.14.1
k8s.gcr.io/kube-proxy:v1.14.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

# if you can get past the firewall (e.g. via a proxy), pull the images first
$ kubeadm config images pull

# if you cannot, specify --image-repository at init time instead
kubeadm init \
    --apiserver-advertise-address=192.168.0.109 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.14.1 \
    --pod-network-cidr=10.244.0.0/16
# output like the following means the installation succeeded
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.0.109:6443 --token wde86i.tmjaf7d18v26zg03 --discovery-token-ca-cert-hash sha256:b05fa53d8f8c10fa4159ca499eb91cf11fbb9b27801b7ea9eb7d5066d86ae366
The init flags explained:
--apiserver-advertise-address: the address the API server advertises to the cluster, i.e. this master's IP
--image-repository: pull the control-plane images from the Aliyun mirror instead of k8s.gcr.io
--kubernetes-version: pin the release so it matches the installed kubeadm
--pod-network-cidr: the pod subnet; 10.244.0.0/16 is what the flannel manifest at the end of this post expects

Configure kubectl's credentials for the API server

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install the flannel network plugin

# the coredns pods stay Pending until a network plugin is configured
$ kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-gbgzx                  0/1     Pending   0          5m28s
coredns-86c58d9df4-kzljk                  0/1     Pending   0          5m28s
etcd-miwifi-r1cm-srv                      1/1     Running   0          4m40s
kube-apiserver-miwifi-r1cm-srv            1/1     Running   0          4m52s
kube-controller-manager-miwifi-r1cm-srv   1/1     Running   0          5m3s
kube-proxy-9c8cs                          1/1     Running   0          5m28s
kube-scheduler-miwifi-r1cm-srv            1/1     Running   0          4m45s

Download kube-flannel.yml

# this also requires getting past the firewall; a copy of kube-flannel.yml is included at the end of this post
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Install flannel

kubectl apply -f kube-flannel.yml
# after a short wait the node status becomes Ready
kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
centos1   Ready    master   9h    v1.14.1

Note: by default the master node does not schedule ordinary pods. To allow it to, remove the master taint:

kubectl taint nodes --all node-role.kubernetes.io/master-

Check the pods and core components

Everything should be Running:

$ kubectl get pod -A
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
kube-system   coredns-584795fc57-gkz8c                1/1     Running   1          9h
kube-system   coredns-584795fc57-khnh9                1/1     Running   1          9h
kube-system   etcd-centos1                            1/1     Running   1          9h
kube-system   kube-apiserver-centos1                  1/1     Running   1          9h
kube-system   kube-controller-manager-centos1         1/1     Running   3          9h
kube-system   kube-flannel-ds-xplqf                   1/1     Running   2          9h
kube-system   kube-proxy-frgmz                        1/1     Running   1          9h
kube-system   kube-scheduler-centos1                  1/1     Running   4          9h
kube-system   kubernetes-dashboard-645bd89df5-6bbg5   1/1     Running   1          9h
# all healthy
$ kubectl get componentstatus
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}

Join a new node

Adding a node to a k8s cluster is extremely simple. The new node must already be running kubelet, with SELinux, swap, and the firewall disabled.
On the master, run kubeadm token create --print-join-command to get the join command.
Run it on the new node; output like the one below means it succeeded. Note that the node's kubelet version must match the master's.
You may run into the odd strange error; solutions for most of them can be found online.
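As an aside, the token kubeadm prints has a fixed shape: six lowercase alphanumeric characters, a dot, then sixteen more. A quick format sanity check, using the token value from the init output earlier:

```shell
# Validate the bootstrap token format (value copied from the init output above).
token="wde86i.tmjaf7d18v26zg03"
if echo "$token" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
  echo "token format ok"
fi
```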

$ kubeadm join 192.168.0.109:6443 --token h2ptf3.462ye8azrpbietgt     --discovery-token-ca-cert-hash sha256:1995d3c35f8863b66344129fddde6a69aeced57ed452658e3500fcbb2fc18784
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] WARNING: unable to stop the kubelet service momentarily: [exit status 5]
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
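One detail worth knowing: the --discovery-token-ca-cert-hash value in the join command is just the SHA-256 digest of the cluster CA's public key in DER form. The sketch below demonstrates the computation on a throwaway self-signed certificate; on a real master you would run the same pipeline (which is the one the kubeadm documentation gives for recovering a lost hash) against /etc/kubernetes/pki/ca.crt:

```shell
# Generate a throwaway certificate purely to demonstrate the hash pipeline.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
    -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null

# Extract the public key, convert it to DER, and hash it -- the same steps
# kubeadm uses to produce the discovery-token-ca-cert-hash.
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```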

Wrapping up

And that's it. Installing the dashboard is covered in the next post.

kube-flannel.yml

Note: change the image below to the one matching your machine's architecture.

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg