Manually Installing Kubernetes on ARM64 (Without Certificates)
0. Preparation
-
This installation was done on arm64 hardware: an RK3328 board running Ubuntu 16.04.
The cluster has one Master and two Nodes, with the following components to install:
Role | IP | Components |
---|---|---|
k8s-master | 172.16.32.10 | etcd,apiserver,controller-manager,scheduler,flannel |
k8s-node1 | 172.16.32.11 | kubelet,kube-proxy,flannel,docker |
k8s-node2 | 172.16.32.12 | kubelet,kube-proxy,flannel,docker |
Here the Master components are kube-apiserver, kube-controller-manager, and kube-scheduler (shortened to apiserver, controller-manager, and scheduler in the table above).
-
Download the required components in advance:
- etcd-v3.3.5-linux-arm64.tar.gz
- flannel-v0.10.0-linux-arm64.tar.gz
- kubernetes-node-linux-arm64.tar.gz
- kubernetes-server-linux-arm64.tar.gz
All of these can be downloaded from GitHub. If you do not know the exact URL, search for "xxx release" (for example, "etcd release"); the first result is usually the right one. Then pick the build matching your version and architecture.
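As a convenience, the downloads can be scripted. The URLs below follow the usual GitHub/Kubernetes release URL patterns but are assumptions; verify them against each project's release page before using them. The loop only prints the wget commands (a dry run); drop the `echo` to actually download.

```shell
# Dry-run sketch of the component downloads; URLs are assumptions and
# should be checked against the projects' release pages.
ARCH=arm64
for url in \
  "https://github.com/coreos/etcd/releases/download/v3.3.5/etcd-v3.3.5-linux-${ARCH}.tar.gz" \
  "https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-${ARCH}.tar.gz" \
  "https://dl.k8s.io/v1.10.0/kubernetes-server-linux-${ARCH}.tar.gz" \
  "https://dl.k8s.io/v1.10.0/kubernetes-node-linux-${ARCH}.tar.gz"
do
  echo "wget $url"   # remove the echo to download for real
done
```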
1. Deploying the Master
-
Pre-deployment initialization
-
First, perform the following steps as root
-
Disable the firewall
ufw disable
-
Install ntp (if it is not already installed)
sudo apt-get install ntp
-
Add the hostnames and IPs to /etc/hosts
172.16.32.10 k8s-master
172.16.32.11 k8s-node1
172.16.32.12 k8s-node2
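The /etc/hosts step can be made idempotent, so rerunning the setup never duplicates entries. This sketch uses a scratch file; on a real machine point `HOSTS_FILE` at /etc/hosts (and append with `sudo tee -a` rather than a bare redirect).

```shell
# Idempotent sketch of the /etc/hosts step: each entry is appended only
# if it is not already present. HOSTS_FILE is a scratch copy here.
HOSTS_FILE=/tmp/hosts.demo
printf '127.0.0.1 localhost\n' > "$HOSTS_FILE"
while read -r entry; do
  grep -qF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done <<'EOF'
172.16.32.10 k8s-master
172.16.32.11 k8s-node1
172.16.32.12 k8s-node2
EOF
cat "$HOSTS_FILE"
```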
-
Create a k8s-master user and grant it root privileges
useradd -m -d /home/k8s-master -s /bin/bash k8s-master
sudo sed -i -r '/root.*ALL=\(ALL.*ALL/a \k8s-master ALL=\(ALL\) NOPASSWD: ALL' /etc/sudoers
-
Switch to the k8s-master user
su k8s-master
-
-
Perform the following steps as the k8s-master user
-
Create directories to hold the binaries and the components' configuration files
sudo mkdir -p ~/kubernetes/bin ~/kubernetes/cfg
-
Set the PATH environment variable; since the binaries live in a custom directory, adding it to PATH makes them easier to run.
echo "export PATH=\$PATH:/home/k8s-master/kubernetes/bin" >> ~/.bashrc
source ~/.bashrc
-
-
-
Installing etcd
-
Extract etcd-v3.3.5-linux-arm64.tar.gz
sudo tar -zxvf etcd-v3.3.5-linux-arm64.tar.gz
-
Copy etcd and etcdctl from the extracted directory to ~/kubernetes/bin
sudo cp etcd-v3.3.5-linux-arm64/etcd* ~/kubernetes/bin
-
Create the etcd configuration file
sudo vi ~/kubernetes/cfg/etcd.conf
ETCD_NAME="default"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
Note: before starting the etcd service, create the /var/lib/etcd directory (sudo mkdir -p /var/lib/etcd), which holds etcd's data.
-
Create the etcd service file
sudo vi /lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
WorkingDirectory=/var/lib/etcd
Environment=ETCD_UNSUPPORTED_ARCH=arm64
EnvironmentFile=-/home/k8s-master/kubernetes/cfg/etcd.conf
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /home/k8s-master/kubernetes/bin/etcd"
Type=notify

[Install]
WantedBy=multi-user.target
Note the line Environment=ETCD_UNSUPPORTED_ARCH=arm64: etcd currently requires this variable to run on arm, and will refuse to start without it.
-
Start the etcd service
sudo systemctl daemon-reload
sudo systemctl start etcd
sudo systemctl enable etcd
You can check etcd's state with systemctl status etcd; if it fails, check the log at /var/log/syslog.
-
Create the etcd network configuration
etcdctl set /coreos.com/network/config '{"Network":"10.1.0.0/16","Backend":{"Type":"vxlan"}}'
If the Backend type is not set to vxlan, flannel installation will fail with an error that the UDP backend is unsupported: flannel's default backend is UDP, which arm does not support, so the backend type must be specified when creating the etcd network configuration.
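The quoting in the etcdctl command is easy to get wrong, so one option is to write the flannel config to a file first and validate it as JSON before pushing it into etcd (python3 is assumed to be available; the file path is arbitrary):

```shell
# Write the flannel network config to a file and validate it as JSON
# before handing it to etcdctl, so quoting mistakes are caught early.
cat > /tmp/flannel-config.json <<'EOF'
{"Network": "10.1.0.0/16", "Backend": {"Type": "vxlan"}}
EOF
python3 -m json.tool /tmp/flannel-config.json   # fails loudly on malformed JSON
# etcdctl set /coreos.com/network/config "$(cat /tmp/flannel-config.json)"
```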
-
-
Installing the three core Master components: kube-apiserver, kube-controller-manager, kube-scheduler
-
Extract kubernetes-server-linux-arm64.tar.gz
sudo mkdir -p kubernetes-server-linux-arm64 && sudo tar -zxvf kubernetes-server-linux-arm64.tar.gz -C kubernetes-server-linux-arm64
-
Copy the binaries to ~/kubernetes/bin
sudo cp kubernetes-server-linux-arm64/kubernetes/server/bin/kube-apiserver ~/kubernetes/bin
sudo cp kubernetes-server-linux-arm64/kubernetes/server/bin/kube-controller-manager ~/kubernetes/bin
sudo cp kubernetes-server-linux-arm64/kubernetes/server/bin/kube-scheduler ~/kubernetes/bin
-
Installing kube-apiserver
-
Add the kube-apiserver configuration file
sudo vi ~/kubernetes/cfg/kube-apiserver
# --logtostderr=true: log to standard error instead of files
KUBE_LOGTOSTDERR="--logtostderr=true"
# --v=0: log level for V logs
KUBE_LOG_LEVEL="--v=4"
# --etcd-servers=[]: List of etcd servers to watch (http://ip:port),
# comma separated. Mutually exclusive with -etcd-config
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
# --insecure-bind-address=127.0.0.1: The IP address on which to serve the --insecure-port.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# --insecure-port=8080: The port on which to serve unsecured, unauthenticated access.
KUBE_API_PORT="--insecure-port=8080"
# --allow-privileged=false: If true, allow privileged containers.
KUBE_ALLOW_PRIV="--allow-privileged=false"
# --service-cluster-ip-range=<nil>: A CIDR notation IP range from which to assign service cluster IPs.
# This must not overlap with any IP ranges assigned to nodes for pods.
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=169.169.0.0/16"
# --admission-control="AlwaysAdmit": Ordered list of plug-ins
# to do admission control of resources into cluster.
# Comma-delimited list of:
#   LimitRanger, AlwaysDeny, SecurityContextDeny, NamespaceExists,
#   NamespaceLifecycle, NamespaceAutoProvision, AlwaysAdmit,
#   ServiceAccount, DefaultStorageClass, DefaultTolerationSeconds, ResourceQuota
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ResourceQuota"
-
Add the kube-apiserver service file
sudo vi /lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/home/k8s-master/kubernetes/cfg/kube-apiserver
ExecStart=/home/k8s-master/kubernetes/bin/kube-apiserver ${KUBE_LOGTOSTDERR} \
        ${KUBE_LOG_LEVEL} \
        ${KUBE_ETCD_SERVERS} \
        ${KUBE_API_ADDRESS} \
        ${KUBE_API_PORT} \
        ${KUBE_ALLOW_PRIV} \
        ${KUBE_SERVICE_ADDRESSES} \
        ${KUBE_ADMISSION_CONTROL}
Restart=on-failure

[Install]
WantedBy=multi-user.target
-
Start the service
sudo systemctl daemon-reload
sudo systemctl start kube-apiserver
sudo systemctl enable kube-apiserver
-
-
Installing kube-controller-manager
-
Add the configuration file
sudo vi ~/kubernetes/cfg/kube-controller-manager
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=4"
KUBE_MASTER="--master=172.16.32.10:8080"
-
Add the service file
sudo vi /lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/home/k8s-master/kubernetes/cfg/kube-controller-manager
ExecStart=/home/k8s-master/kubernetes/bin/kube-controller-manager ${KUBE_LOGTOSTDERR} \
        ${KUBE_LOG_LEVEL} \
        ${KUBE_MASTER}
Restart=on-failure

[Install]
WantedBy=multi-user.target
-
Start the service
sudo systemctl daemon-reload
sudo systemctl start kube-controller-manager
sudo systemctl enable kube-controller-manager
-
-
Installing kube-scheduler
-
Add the configuration file
sudo vi ~/kubernetes/cfg/kube-scheduler
# --logtostderr=true: log to standard error instead of files
KUBE_LOGTOSTDERR="--logtostderr=true"
# --v=0: log level for V logs
KUBE_LOG_LEVEL="--v=4"
KUBE_MASTER="--master=172.16.32.10:8080"
-
Add the service file
sudo vi /lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/home/k8s-master/kubernetes/cfg/kube-scheduler
ExecStart=/home/k8s-master/kubernetes/bin/kube-scheduler ${KUBE_LOGTOSTDERR} \
        ${KUBE_LOG_LEVEL} \
        ${KUBE_MASTER}
Restart=on-failure

[Install]
WantedBy=multi-user.target
-
Start the service
sudo systemctl daemon-reload
sudo systemctl start kube-scheduler
sudo systemctl enable kube-scheduler
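With etcd and the three master components started, a quick status sweep can confirm they are all running. This is just a convenience loop around systemctl is-active:

```shell
# Print the state of etcd and the three master components.
# `systemctl is-active` prints active/inactive per unit; on a machine
# without these units the state shows as unknown instead of aborting.
for svc in etcd kube-apiserver kube-controller-manager kube-scheduler; do
  state=$(systemctl is-active "$svc" 2>/dev/null)
  [ -n "$state" ] || state=unknown
  printf '%-24s %s\n' "$svc" "$state"
done
```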
-
-
-
Installing flannel
-
Extract flannel-v0.10.0-linux-arm64.tar.gz
sudo mkdir -p flannel-v0.10.0-linux-arm64 && sudo tar -zxvf flannel-v0.10.0-linux-arm64.tar.gz -C flannel-v0.10.0-linux-arm64
-
First, copy the binaries to ~/kubernetes/bin
sudo cp flannel-v0.10.0-linux-arm64/* ~/kubernetes/bin
-
Set up flanneld
-
Add the configuration file
sudo vi ~/kubernetes/cfg/flanneld.conf
FLANNEL_ETCD_ENDPOINTS="http://172.16.32.10:2379"
FLANNEL_IFACE="eth0"
FLANNEL_ETCD_PREFIX="/coreos.com/network"
FLANNEL_OPTIONS=""
Note: if the machine has multiple network interfaces, specify which one to use by setting FLANNEL_IFACE=xxx and passing -iface=${FLANNEL_IFACE} at startup.
-
Add the service file
sudo vi /lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld
Documentation=https://github.com/coreos/flannel
After=network.target
Before=docker.service

[Service]
EnvironmentFile=-/home/k8s-master/kubernetes/cfg/flanneld.conf
ExecStart=/home/k8s-master/kubernetes/bin/flanneld \
        -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \
        -iface=${FLANNEL_IFACE} \
        -etcd-prefix=${FLANNEL_ETCD_PREFIX} \
        ${FLANNEL_OPTIONS}
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
-
Start the service
sudo systemctl daemon-reload
sudo systemctl start flanneld
sudo systemctl enable flanneld
-
-
Configure the docker network, overriding docker's default bridge settings
# Read the subnet flannel assigned to this host and note it down
sudo cat /run/flannel/subnet.env | grep "FLANNEL_SUBNET" | cut -d= -f2
# (10.1.90.1/24 in this example)

# Create a drop-in file overriding docker's startup options
# (note: `sudo echo "..." > file` would fail because the redirection runs without sudo, so use tee)
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/docker.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --bip=10.1.90.1/24 --mtu=1472
EOF

# Restart docker
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
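Rather than copying the subnet by hand, the --bip and --mtu values can be derived from flannel's subnet.env. A sample file stands in for /run/flannel/subnet.env here, so the sketch is self-contained; on a real node read the actual file:

```shell
# Derive docker's --bip/--mtu from flannel's subnet.env. A sample file
# is used here; on a real node set SUBNET_ENV=/run/flannel/subnet.env.
SUBNET_ENV=/tmp/subnet.env.demo
cat > "$SUBNET_ENV" <<'EOF'
FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.90.1/24
FLANNEL_MTU=1472
EOF
BIP=$(grep '^FLANNEL_SUBNET=' "$SUBNET_ENV" | cut -d= -f2)
MTU=$(grep '^FLANNEL_MTU=' "$SUBNET_ENV" | cut -d= -f2)
echo "ExecStart=/usr/bin/dockerd --bip=${BIP} --mtu=${MTU}"
```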
-
2. Deploying the Nodes
Deploying a Node is largely the same as deploying the Master: add the configuration files, add the service files, then start the services.
-
The same initialization is needed before deployment
-
Disable the firewall
ufw disable
-
Install ntp (if it is not already installed)
sudo apt-get install ntp
-
Add the hostnames and IPs to /etc/hosts
172.16.32.10 k8s-master
172.16.32.11 k8s-node1
172.16.32.12 k8s-node2
-
Create a k8s-node1 user and grant it root privileges
useradd -m -d /home/k8s-node1 -s /bin/bash k8s-node1
sudo sed -i -r '/root.*ALL=\(ALL.*ALL/a \k8s-node1 ALL=\(ALL\) NOPASSWD: ALL' /etc/sudoers
-
Switch to the k8s-node1 user
su k8s-node1
-
Perform the following steps as the k8s-node1 user
-
Create directories to hold the binaries and the components' configuration files
sudo mkdir -p ~/kubernetes/bin ~/kubernetes/cfg
-
-
-
Installing kubelet and kube-proxy
-
Extract kubernetes-node-linux-arm64.tar.gz
sudo mkdir -p kubernetes-node-linux-arm64 && sudo tar -zxvf kubernetes-node-linux-arm64.tar.gz -C kubernetes-node-linux-arm64
-
Copy the binaries to ~/kubernetes/bin
sudo cp kubernetes-node-linux-arm64/kubernetes/node/bin/kubelet ~/kubernetes/bin
sudo cp kubernetes-node-linux-arm64/kubernetes/node/bin/kube-proxy ~/kubernetes/bin
-
Installing kubelet
-
Add the configuration files
sudo vi ~/kubernetes/cfg/kubelet.kubeconfig
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://172.16.32.10:8080/
  name: local
contexts:
- context:
    cluster: local
  name: local
current-context: local
sudo vi ~/kubernetes/cfg/kubelet
# --logtostderr=true: log to standard error instead of files
KUBE_LOGTOSTDERR="--logtostderr=true"
# --v=0: log level for V logs
KUBE_LOG_LEVEL="--v=4"
# --address=0.0.0.0: The IP address for the Kubelet to serve on (set to 0.0.0.0 for all interfaces)
NODE_ADDRESS="--address=172.16.32.11"
# --port=10250: The port for the Kubelet to serve on. Note that "kubectl logs" will not work if you set this flag.
NODE_PORT="--port=10250"
# --hostname-override="": If non-empty, will use this string as identification instead of the actual hostname.
# This should match the name kube-proxy uses.
NODE_HOSTNAME="--hostname-override=k8s-node1"
# Path to a kubeconfig file, specifying how to connect to the API server.
# Use an absolute path; the kubelet does not expand ~.
KUBELET_KUBECONFIG="--kubeconfig=/home/k8s-node1/kubernetes/cfg/kubelet.kubeconfig"
#KUBELET_KUBECONFIG="--api-servers=http://${MASTER_ADDRESS}:8080"
# --allow-privileged=false: If true, allow containers to request privileged mode. [default=false]
KUBE_ALLOW_PRIV="--allow-privileged=false"
# DNS info
KUBELET__DNS_IP="--cluster-dns=169.169.0.2"
KUBELET_DNS_DOMAIN="--cluster-domain=cluster.local"
KUBELET_SWAP="--fail-swap-on=false"
KUBELET_ARGS="--pod_infra_container_image=hub.c.163.com/allan1991/pause-amd64:3.0"
As of Kubernetes 1.10, the kubelet connects to the API server through a kubeconfig file (--kubeconfig) instead of the old --api-servers flag; the configuration must be updated accordingly, or the kubelet will not start.
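Since every node needs this kubeconfig pointing at the same master, it can be generated from a single MASTER_IP variable. The output path here is a scratch file for illustration; on a real node write ~/kubernetes/cfg/kubelet.kubeconfig:

```shell
# Generate the no-TLS kubeconfig from one MASTER_IP variable so every
# node points at the same API server URL. CFG is a scratch path here.
MASTER_IP=172.16.32.10
CFG=/tmp/kubelet.kubeconfig.demo
cat > "$CFG" <<EOF
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://${MASTER_IP}:8080/
  name: local
contexts:
- context:
    cluster: local
  name: local
current-context: local
EOF
grep 'server:' "$CFG"
```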
-
Add the service file
sudo vi /lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/home/k8s-node1/kubernetes/cfg/kubelet
ExecStart=/home/k8s-node1/kubernetes/bin/kubelet ${KUBE_LOGTOSTDERR} \
        ${KUBE_LOG_LEVEL} \
        ${NODE_ADDRESS} \
        ${NODE_PORT} \
        ${NODE_HOSTNAME} \
        ${KUBELET_KUBECONFIG} \
        ${KUBE_ALLOW_PRIV} \
        ${KUBELET__DNS_IP} \
        ${KUBELET_DNS_DOMAIN} \
        ${KUBELET_SWAP} \
        ${KUBELET_ARGS}
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
-
Start the service
sudo systemctl daemon-reload
sudo systemctl start kubelet
sudo systemctl enable kubelet
-
-
Installing kube-proxy
-
Add the configuration file
sudo vi ~/kubernetes/cfg/kube-proxy
# --logtostderr=true: log to standard error instead of files
KUBE_LOGTOSTDERR="--logtostderr=true"
# --v=0: log level for V logs
KUBE_LOG_LEVEL="--v=4"
# --hostname-override="": If non-empty, will use this string as identification instead of the actual hostname.
NODE_HOSTNAME="--hostname-override=k8s-node1"
# --master="": The address of the Kubernetes API server (overrides any value in kubeconfig)
KUBE_MASTER="--master=http://172.16.32.10:8080"
-
Add the service file
sudo vi /lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/home/k8s-node1/kubernetes/cfg/kube-proxy
ExecStart=/home/k8s-node1/kubernetes/bin/kube-proxy ${KUBE_LOGTOSTDERR} \
        ${KUBE_LOG_LEVEL} \
        ${NODE_HOSTNAME} \
        ${KUBE_MASTER}
Restart=on-failure

[Install]
WantedBy=multi-user.target
-
Start the service
sudo systemctl daemon-reload
sudo systemctl start kube-proxy
sudo systemctl enable kube-proxy
-
-
Installing flannel
The installation is the same as on the Master; refer to the steps above.
3. Problems encountered during installation
-
failed to start containermanager system validation failed - following cgroup subsystem not mounted:[memory]
Solution:
Edit /etc/default/grub:
Add:
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
Update grub:
update-grub
Reboot:
sudo systemctl reboot -i
If grub is not installed, install it first:
sudo apt install grub-efi-arm64 grub-efi-arm64-bin grub2-common
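The GRUB change can be scripted idempotently, so rerunning it never duplicates the flags. A scratch copy of the file is edited here; on a real board edit /etc/default/grub and then run update-grub:

```shell
# Append the cgroup flags to GRUB_CMDLINE_LINUX only if not already
# present. GRUB is a scratch copy; use /etc/default/grub for real.
GRUB=/tmp/grub.demo
echo 'GRUB_CMDLINE_LINUX=""' > "$GRUB"
if ! grep -q 'cgroup_enable=memory' "$GRUB"; then
  sed 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1cgroup_enable=memory swapaccount=1"/' \
    "$GRUB" > "$GRUB.new" && mv "$GRUB.new" "$GRUB"
fi
cat "$GRUB"
```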
-
etcdmain: etcd on unsupported platform without ETCD_UNSUPPORTED_ARCH=arm64 set
Solution:
Add the following line to the etcd service file:
Environment=ETCD_UNSUPPORTED_ARCH=arm64 # this line is required
-
UDP backend is not supported on this architecture
Solution:
flannel's default backend type is udp, which arm64 does not support, so the backend must be specified in the etcd network config:
etcdctl set /coreos.com/network/config '{"Network":"10.1.0.0/16","Backend":{"Type":"vxlan"}}'
-