Docker & Kubernetes Cluster Installation Tutorial

2020-12-26  rekca

1. Environment (CentOS)

master: CentOS 7, 192.168.99.110
node: CentOS, 192.168.99.151
node: CentOS, 192.168.99.177

Prepare the images:

docker pull k8s.gcr.io/kube-apiserver:v1.20.1
docker pull k8s.gcr.io/kube-controller-manager:v1.20.1
docker pull k8s.gcr.io/kube-scheduler:v1.20.1
docker pull k8s.gcr.io/kube-proxy:v1.20.1
docker pull k8s.gcr.io/pause:3.2
docker pull k8s.gcr.io/etcd:3.4.13-0
docker pull k8s.gcr.io/coredns:1.7.0

As of this writing (2020-12-26), the v1.20.1 component images have not yet been synced to Docker Hub at https://hub.docker.com/u/mirrorgooglecontainers, so you can pull the image list below instead:

docker pull rekca/kube-apiserver:v1.20.1
docker pull rekca/kube-controller-manager:v1.20.1
docker pull rekca/kube-scheduler:v1.20.1
docker pull rekca/kube-proxy:v1.20.1
docker pull rekca/pause:3.2
docker pull rekca/etcd:3.4.13-0
docker pull rekca/coredns:1.7.0

# Re-tag the mirror images with the names kubeadm expects
docker tag rekca/kube-apiserver:v1.20.1 k8s.gcr.io/kube-apiserver:v1.20.1
docker tag rekca/kube-controller-manager:v1.20.1 k8s.gcr.io/kube-controller-manager:v1.20.1
docker tag rekca/kube-scheduler:v1.20.1 k8s.gcr.io/kube-scheduler:v1.20.1
docker tag rekca/kube-proxy:v1.20.1 k8s.gcr.io/kube-proxy:v1.20.1
docker tag rekca/pause:3.2 k8s.gcr.io/pause:3.2
docker tag rekca/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker tag rekca/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
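
If typing each pair by hand is tedious, the same pull-and-retag can be done in a loop (a sketch using the same mirror and image list as above):

for img in kube-apiserver:v1.20.1 kube-controller-manager:v1.20.1 \
           kube-scheduler:v1.20.1 kube-proxy:v1.20.1 \
           pause:3.2 etcd:3.4.13-0 coredns:1.7.0; do
  # Pull from the mirror, then tag with the k8s.gcr.io name kubeadm expects
  docker pull "rekca/${img}"
  docker tag "rekca/${img}" "k8s.gcr.io/${img}"
done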

2. Install Docker. Both the master and the worker nodes need it: kubeadm starts each component as a container by default, so Docker must be installed on the master as well.

# (Install Docker CE)
## Set up the repository
### Install required packages
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
## Add the Docker repository
sudo yum-config-manager --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo
# Install Docker CE
sudo yum update -y && sudo yum install -y \
  containerd.io-1.2.13 \
  docker-ce-19.03.11 \
  docker-ce-cli-19.03.11
## Create /etc/docker
sudo mkdir /etc/docker
# Set up the Docker daemon
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
# Create /etc/systemd/system/docker.service.d
sudo mkdir -p /etc/systemd/system/docker.service.d
# Restart Docker
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
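
To confirm the daemon picked up the settings from daemon.json (in particular the systemd cgroup driver, which must match the kubelet's), a quick check:

docker info | grep -i "cgroup driver"
# Expected output: Cgroup Driver: systemd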

Problem encountered: after starting Docker, running any docker command fails with dial unix /var/run/docker.sock: connect: permission denied. This happens because the current user is not the same user that started Docker, so it lacks permission on the socket.
Solution:

# Create the docker group
groupadd docker

# Add the current user to the docker group
gpasswd -a ${USER} docker

# Verify the group was added
cat /etc/group | grep ^docker

# Restart Docker
systemctl restart docker

# Refresh group membership in the current shell
newgrp docker
# docker commands now run without the permission error
docker images
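
An equivalent shortcut, if you prefer a single command (usermod is standard on CentOS 7):

# Add the current user to the docker group in one step
sudo usermod -aG docker ${USER}
# Log out and back in (or run newgrp docker) for the membership to take effect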

3. Install Kubernetes

Kubernetes can be installed with any of several deployment tools:

Bootstrapping clusters with kubeadm
Installing Kubernetes with kops
Installing Kubernetes with Kubespray

This tutorial uses kubeadm.

Install kubeadm:
First, let iptables see bridged traffic:

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
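
The sysctl settings above only take effect if the br_netfilter kernel module is loaded, which is not guaranteed on a fresh CentOS 7 install; a quick check-and-load sketch:

# Load br_netfilter now if it is missing
lsmod | grep br_netfilter || sudo modprobe br_netfilter
# Load it automatically on boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF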

Configure the Aliyun (Alibaba Cloud) Kubernetes yum repository:

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
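
Optionally, verify the repository is reachable and see which versions are available (standard yum flags):

yum list kubelet kubeadm kubectl --showduplicates --disableexcludes=kubernetes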

Install kubelet, kubeadm, and kubectl:

# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

sudo systemctl enable --now kubelet

Start kubelet:

sudo systemctl daemon-reload
sudo systemctl restart kubelet
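
Note: until kubeadm init has run, the kubelet crash-loops every few seconds waiting for its configuration; that is expected. To inspect its state:

systemctl status kubelet
# Follow the kubelet logs:
journalctl -u kubelet -f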

Create the cluster with kubeadm (192.168.0.0/16 here matches Calico's default pod CIDR):

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

When it finishes, it prints output like the following:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.99.110:6443 --token 9wv08y.fapyc97l762n6lya \
    --discovery-token-ca-cert-hash sha256:93b8cab714eae4c41ad4a139c4f3969f8b1be9d7036091aeda36e8ce975b77b9
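
After running the mkdir/cp/chown commands printed above, a quick sanity check (the master shows NotReady until a pod network add-on is installed; that is expected):

kubectl get nodes
kubectl get pods -n kube-system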

Get the token:

kubeadm token list

By default, the token expires after 24 hours. If you want to join new nodes after it expires, create a new one:

kubeadm token create
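
A convenient shortcut: kubeadm can also print a ready-to-run join command (new token plus CA cert hash) in one step:

kubeadm token create --print-join-command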

Get the value for --discovery-token-ca-cert-hash:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'

Finally, assemble the kubeadm join command from those two values:

kubeadm join 192.168.99.110:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

Install a pod network add-on (choose one of Calico or Flannel)

Calico
Docs: https://docs.projectcalico.org/getting-started/kubernetes/quickstart

kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml

watch kubectl get pods -n calico-system # keep watching until every pod is Running
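
Once every Calico pod is Running, the nodes should turn Ready:

kubectl get nodes -o wide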

Flannel
Manifest: https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml
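
To use Flannel instead, apply the manifest (a sketch using the raw URL of the file above; note Flannel defaults to the 10.244.0.0/16 pod CIDR, so kubeadm init would need --pod-network-cidr=10.244.0.0/16):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml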

Static pods

A static pod is a special kind of pod that the kubelet manages directly on its own node; the kubelet supports static pods natively (it cannot create a Deployment on its own, only pods).
Static pod manifests are placed in a directory on the node: /etc/kubernetes/manifests.
The etcd, kube-apiserver, kube-controller-manager, and kube-scheduler installed by kubeadm all run as static pods:

/etc/kubernetes/manifests
  etcd.yaml
  kube-apiserver.yaml
  kube-controller-manager.yaml
  kube-scheduler.yaml
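
The kubelet watches this directory because kubeadm sets staticPodPath in the kubelet's config; you can confirm this on the master:

grep staticPodPath /var/lib/kubelet/config.yaml
# staticPodPath: /etc/kubernetes/manifests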

Pitfalls encountered during installation

  1. kube-proxy on a worker node stays stuck in ContainerCreating
    Cause: network access; the k8s.gcr.io/kube-proxy:v1.20.1 and k8s.gcr.io/pause images need to be downloaded to each node in advance (solution sketch below)
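
Solution sketch: reuse the pull-and-retag approach from section 1 on each worker node before joining:

for img in kube-proxy:v1.20.1 pause:3.2; do
  docker pull "rekca/${img}"
  docker tag "rekca/${img}" "k8s.gcr.io/${img}"
done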