
Setting up k8s

2019-02-18  kxmile

1.0 Preparation

Download the CentOS-7-x86_64-Minimal-1810 image
and install it with VMware or VirtualBox.
1.1 Switch to the Aliyun yum mirror
yum install -y wget
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum makecache
yum install -y net-tools
1.2 Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
1.3 Disable swap
1.3.1 Temporarily
swapoff -a
1.3.2 Persistently (takes effect after reboot)
vi /etc/fstab
Comment out the /dev/mapper/centos-swap swap ... line
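Editing /etc/fstab by hand is easy to get wrong; the swap line can also be commented out with sed. A minimal sketch, demoed on a scratch copy here (on the real node, run the function against /etc/fstab):

```shell
# comment_out_swap: comment out every swap entry in an fstab-style file.
# Parameterized so it can be demoed safely on a copy.
comment_out_swap() {
  sed -i '/\sswap\s/ s/^[^#]/#&/' "$1"
}

# demo on a scratch file
tmp_fstab=$(mktemp)
printf '/dev/mapper/centos-swap swap swap defaults 0 0\nUUID=abcd / xfs defaults 0 0\n' > "$tmp_fstab"
comment_out_swap "$tmp_fstab"
cat "$tmp_fstab"
```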
1.4 Configure the Docker repo
yum -y install yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache
1.5 Disable SELinux
getenforce
setenforce 0
vi /etc/sysconfig/selinux and set SELINUX=disabled
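The edit can be scripted as well; a sketch, demoed on a scratch copy (on the node, run it against /etc/sysconfig/selinux):

```shell
# set_selinux_disabled: flip the SELINUX= setting to disabled in a
# selinux config file. Only the SELINUX= line is touched.
set_selinux_disabled() {
  sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$1"
}

tmp_cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$tmp_cfg"
set_selinux_disabled "$tmp_cfg"
cat "$tmp_cfg"
```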

2.0 Starting with k8s

Prepare three machines:

172.16.253.129
172.16.253.130
172.16.253.131

These are the IPs of the three machines.
2.1 Configure host names
Run on all three machines:

echo "172.16.253.129 srv.master" >> /etc/hosts
echo "172.16.253.130 srv.etcd" >> /etc/hosts
echo "172.16.253.131 srv.node1" >> /etc/hosts
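Plain `>>` appends duplicate lines if the setup is re-run; a guarded version only adds an entry when the name is missing. A sketch, demoed on a scratch file (on the machines the target is /etc/hosts):

```shell
# add_host: append "ip name" to a hosts file only if the name is not
# already present, so re-running the setup does not duplicate entries.
add_host() {
  grep -qw "$3" "$1" || echo "$2 $3" >> "$1"
}

tmp_hosts=$(mktemp)
add_host "$tmp_hosts" 172.16.253.129 srv.master
add_host "$tmp_hosts" 172.16.253.130 srv.etcd
add_host "$tmp_hosts" 172.16.253.131 srv.node1
add_host "$tmp_hosts" 172.16.253.129 srv.master   # no-op on repeat
cat "$tmp_hosts"
```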

3.1 Set up etcd

Log in to the etcd server
ssh root@172.16.253.130
Install etcd
yum -y install etcd
3.2 Edit the config
vi /etc/etcd/etcd.conf

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379" 
ETCD_ADVERTISE_CLIENT_URLS="http://srv.etcd:2379" 

3.3 Run etcd

systemctl enable etcd && systemctl start etcd

[root@localhost ~]# netstat -nlp | grep etcd
tcp        0      0 127.0.0.1:2380          0.0.0.0:*               LISTEN      7130/etcd           
tcp6       0      0 :::2379   

3.4 Configure the etcd network entry

# set (like defining a variable)
etcdctl -C http://172.16.253.130:2379 set /atomic.io/network/config '{"Network":"172.17.0.0/16"}' 
 
# get
# etcdctl -C http://172.16.253.130:2379 get /atomic.io/network/config
{"Network":"172.17.0.0/16"}

Note:
172.17.0.0/16 is the subnet of the Docker bridge on the node hosts.
/atomic.io/network/config acts like a variable name; it matches the FLANNEL_ETCD_PREFIX setting in flanneld's config on the nodes.
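A malformed value here only surfaces later as a flanneld startup failure, so a rough format check before writing the key can save debugging time. A sketch (the regex only checks the shape of the CIDR, not that the octets are in range):

```shell
# Build the flannel config JSON from the CIDR and sanity-check its
# shape before writing it into etcd.
CIDR="172.17.0.0/16"
if echo "$CIDR" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]{1,2}$'; then
  FLANNEL_CONFIG="{\"Network\":\"$CIDR\"}"
  echo "$FLANNEL_CONFIG"
  # then: etcdctl -C http://172.16.253.130:2379 set /atomic.io/network/config "$FLANNEL_CONFIG"
else
  echo "bad CIDR: $CIDR" >&2
fi
```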

4.1 Install services on the master

ssh root@172.16.253.129
yum -y install kubernetes-master
4.2 Edit the config
vi /etc/kubernetes/apiserver

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0" 
KUBE_ETCD_SERVERS="--etcd-servers=http://srv.etcd:2379" 

vi /etc/kubernetes/config

KUBE_MASTER="--master=http://srv.master:8080"

4.3 Start the services

systemctl enable kube-apiserver kube-scheduler kube-controller-manager
systemctl start kube-apiserver kube-scheduler kube-controller-manager

[root@localhost ~]# netstat -nlpt | grep kube
tcp6       0      0 :::6443                 :::*                    LISTEN      15190/kube-apiserve 
tcp6       0      0 :::10251                :::*                    LISTEN      15191/kube-schedule 
tcp6       0      0 :::10252                :::*                    LISTEN      15192/kube-controll 
tcp6       0      0 :::8080                 :::*                    LISTEN      15190/kube-apiserve 

4.4 Verify the API server
curl http://172.16.253.129:8080/version
4.5 If pod creation fails with an authentication error
vi /etc/kubernetes/apiserver

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

Remove the ServiceAccount and SecurityContextDeny options from that line,
then run systemctl restart kube-apiserver.
Finally, delete the failed pods and recreate them.
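The same edit can be scripted with sed; a sketch, demoed on a scratch copy of the line (on the master, run the function against /etc/kubernetes/apiserver and restart kube-apiserver afterwards):

```shell
# strip_admission: drop ServiceAccount and SecurityContextDeny from the
# --admission-control list, handling either position in the comma list.
strip_admission() {
  sed -i 's/SecurityContextDeny,//;s/,SecurityContextDeny//;s/ServiceAccount,//;s/,ServiceAccount//' "$1"
}

tmp_api=$(mktemp)
echo 'KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"' > "$tmp_api"
strip_admission "$tmp_api"
cat "$tmp_api"
```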

5.0 Deploy the node

ssh root@172.16.253.131
5.1 Install Docker
yum -y install docker
Enable on boot and start:
systemctl enable docker && systemctl start docker

[root@localhost ~]# ip a s docker0
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:35:d7:43:6f brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever

vi /etc/docker/daemon.json

{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}

systemctl restart docker
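daemon.json must be a complete JSON object; a missing brace makes the Docker daemon refuse to start. A rough shape check before restarting, written to a scratch path here (on the node the target is /etc/docker/daemon.json):

```shell
# Write the mirror config to a scratch path and do a rough JSON shape
# check: the file should begin with '{' and end with '}'.
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
head -c1 /tmp/daemon.json | grep -q '{' && tail -c2 /tmp/daemon.json | grep -q '}' && echo "shape OK"
# on the node: cp /tmp/daemon.json /etc/docker/daemon.json && systemctl restart docker
```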
5.2 Deploy flannel
yum -y install flannel
vi /etc/sysconfig/flanneld

FLANNEL_ETCD_ENDPOINTS="http://srv.etcd:2379" 
FLANNEL_ETCD_PREFIX="/atomic.io/network" 

systemctl enable flanneld && systemctl restart flanneld

[root@localhost ~]# netstat -nlp | grep flanneld
udp        0      0 172.16.253.131:8285     0.0.0.0:*                           7670/flanneld     
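Once flanneld is running it records the leased subnet in /run/flannel/subnet.env, and Docker typically needs a restart afterwards so docker0 picks that subnet up. A small parser for that file, demoed on a scratch copy (the subnet value below is an example):

```shell
# read_flannel_subnet: print the FLANNEL_SUBNET value from a
# subnet.env-style file.
read_flannel_subnet() {
  sed -n 's/^FLANNEL_SUBNET=//p' "$1"
}

tmp_env=$(mktemp)
printf 'FLANNEL_NETWORK=172.17.0.0/16\nFLANNEL_SUBNET=172.17.58.1/24\nFLANNEL_MTU=1472\n' > "$tmp_env"
read_flannel_subnet "$tmp_env"
# on the node: read_flannel_subnet /run/flannel/subnet.env
```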

5.3 Deploy the node services
yum -y install kubernetes-node
vi /etc/kubernetes/config

KUBE_MASTER="--master=http://srv.master:8080"

vi /etc/kubernetes/kubelet

 KUBELET_HOSTNAME="--hostname-override=srv.node1" 
KUBELET_API_SERVER="--api-servers=http://srv.master:8080"

systemctl enable kubelet kube-proxy && systemctl start kubelet kube-proxy

[root@localhost ~]# netstat -ntlp | grep kube
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      7838/kubelet        
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      7842/kube-proxy     
tcp        0      0 127.0.0.1:10250         0.0.0.0:*               LISTEN      7838/kubelet        
tcp        0      0 127.0.0.1:10255         0.0.0.0:*               LISTEN      7838/kubelet        
tcp6       0      0 :::4194                 :::*                    LISTEN      7838/kubelet  

5.4 Fixing the /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory error when creating pods
5.4.1 Install rhsm
yum install *rhsm* -y
5.4.2 Fetch the rpm package
wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
5.4.3 Extract the certificate
rpm2cpio python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm | cpio -iv --to-stdout ./etc/rhsm/ca/redhat-uep.pem | tee /etc/rhsm/ca/redhat-uep.pem
5.4.4 Pull the image
docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
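The error occurs because redhat-ca.crt is a symlink to /etc/rhsm/ca/redhat-uep.pem, which the steps above regenerate. A small check that a certificate path actually resolves, demoed with a scratch symlink (on the node, pass the real redhat-ca.crt path):

```shell
# check_cert: report whether a certificate path resolves to an existing
# file; a dangling symlink fails the -e test.
check_cert() {
  if [ -e "$1" ]; then echo "ok: $1"; else echo "missing: $1"; fi
}

certdir=$(mktemp -d)
touch "$certdir/redhat-uep.pem"
ln -s "$certdir/redhat-uep.pem" "$certdir/redhat-ca.crt"
check_cert "$certdir/redhat-ca.crt"
```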

6.0 Verification

Log in to the master server:
kubectl get nodes
Result:

[root@localhost ~]#  kubectl get nodes
NAME        STATUS    AGE
srv.node1   Ready     1m
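The check can be scripted by feeding the command output on stdin, so it can be demoed without a cluster (on the master: kubectl get nodes | node_ready srv.node1):

```shell
# node_ready: exit 0 if the named node shows STATUS Ready in
# `kubectl get nodes` output read from stdin.
node_ready() {
  awk -v node="$1" '$1 == node && $2 == "Ready" { found=1 } END { exit !found }'
}

# demo with captured output
printf 'NAME        STATUS    AGE\nsrv.node1   Ready     1m\n' | node_ready srv.node1 && echo "srv.node1 is Ready"
```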
