Setting up a Kubernetes multi-master cluster on Ubuntu 18.04 (1.1)
2019-05-13
萧宵
Preparation
Set the hostname and hosts entries
sudo vim /etc/cloud/cloud.cfg — set preserve_hostname to true so cloud-init does not overwrite the hostname on reboot
sudo vim /etc/hostname — set the hostname
Edit hosts
sudo vim /etc/hosts
192.168.1.49 cluster.kube.com #floating virtual IP
192.168.1.50 master1 #master node 1
192.168.1.51 master2 #master node 2
Adjust kernel parameters
sudo vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1
sudo sysctl -p
sudo vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
sudo sysctl --system
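After reloading, it is worth confirming the values actually took effect; a minimal check, assuming a Linux host where these keys exist:

```shell
# Each key set above should now report the configured value.
sysctl -n net.ipv4.ip_forward          # expect 1
sysctl -n net.ipv4.ip_nonlocal_bind    # expect 1
sysctl -n vm.swappiness                # expect 0
```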
1. Install and configure keepalived (the configuration differs slightly between the primary node and the backup node, as noted below)
(1) Install
sudo apt install -y keepalived
(2) Edit the configuration file
sudo vim /etc/keepalived/keepalived.conf
###################################################################################
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script check_haproxy {
script "killall -0 haproxy"
interval 3
weight -2
fall 10
rise 2
}
vrrp_instance VI_1 {
state BACKUP #MASTER on the primary node, BACKUP on the other nodes
interface ens33 #use the actual name of the local NIC
virtual_router_id 51
priority 250 #must differ on each node
advert_int 1
authentication {
auth_type PASS
auth_pass 35f18af7190d51c9f7f78f37300a0cbd
}
virtual_ipaddress {
192.168.1.49 #floating virtual IP; make sure it is not already in use
}
track_script {
check_haproxy
}
}
###################################################################################
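On the second master the file is identical except for the lines below. The values are a plausible example, not taken from the original; note the interaction with the check script's weight:

```conf
state BACKUP   # MASTER on the node that should normally hold the VIP
priority 249   # lower than the primary's 250, but by less than the check
               # script's weight (2) -- otherwise a failed check on the
               # primary (250 - 2 = 248) could never trigger a failover
```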
(3) Restart and check the status
sudo systemctl enable keepalived.service && sudo systemctl start keepalived.service && sudo systemctl status keepalived.service
ip address show ens33
2. Install and configure haproxy (identical on the primary and the backup node)
(1) Install
sudo apt install -y haproxy
(2) Edit the configuration
sudo vim /etc/haproxy/haproxy.cfg
###################################################################################
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxies to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
mode tcp
bind *:16443
option tcplog
default_backend kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
mode tcp
balance roundrobin
server master1 192.168.1.50:6443 check
server master2 192.168.1.51:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
bind *:1080
stats auth admin:awesomePassword
stats refresh 5s
stats realm HAProxy\ Statistics
stats uri /admin?stats
###################################################################################
(3) Restart and check the status
sudo systemctl enable haproxy.service && sudo systemctl start haproxy.service && sudo systemctl status haproxy.service && sudo ss -lnt | grep -E "16443|1080"
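With both services up, the failover path can be exercised; a sketch, assuming the current node holds the VIP:

```shell
# Stopping haproxy makes the check_haproxy script ("killall -0 haproxy")
# fail; after 10 consecutive failures keepalived lowers this node's
# priority by 2, and the VIP should move to the other master.
sudo systemctl stop haproxy
ip -4 addr show ens33 | grep 192.168.1.49 || echo "VIP released"
# Restore the service afterwards:
sudo systemctl start haproxy
```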
3. Install kubelet
It is best to pin a specific version; the original links to a separate post on installing a specific Kubernetes version.
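Since the linked post is not included here, a sketch of a pinned 1.13.4 install on Ubuntu 18.04. The Aliyun mirror URL is an assumption chosen to match the imageRepository used below; substitute the official packages.cloud.google.com repository if it is reachable:

```shell
# Add the Kubernetes apt repository (Aliyun mirror assumed) and its key.
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" \
  | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
# Install the exact version matching kubernetesVersion in kubeadm-config.yaml.
sudo apt install -y kubelet=1.13.4-00 kubeadm=1.13.4-00 kubectl=1.13.4-00
# Prevent unattended upgrades from moving the cluster version.
sudo apt-mark hold kubelet kubeadm kubectl
```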
4. Initialize the cluster (on the first master)
Run kubeadm config print init-defaults > kubeadm-config.yaml to print the default configuration, then adjust it for your environment.
Use the following kubeadm-config.yaml:
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.4
controlPlaneEndpoint: "192.168.1.49:16443"
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  podSubnet: "192.168.0.0/16"
**Run the initialization**
sudo kubeadm init --config kubeadm-config.yaml
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get pods --all-namespaces
5. Install the pod network
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
Or download the files locally first:
kubectl apply -f rbac-kdd.yaml
kubectl apply -f calico.yaml # the default 192.168.0.0/16 pod CIDR can be edited before applying
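An assumed verification step (the label selector is the one used in the Calico manifests): one calico-node pod per node should reach Running, after which the nodes report Ready.

```shell
# Calico runs as a DaemonSet; expect one pod per node in Running state.
kubectl -n kube-system get pods -l k8s-app=calico-node
# Nodes move from NotReady to Ready once pod networking is functional.
kubectl get nodes
```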
6. Join the other masters to the cluster
(1) Copy the certificates and config file to the other master machines (the host list must be space-separated for the loop below to iterate correctly)
USER=root
CONTROL_PLANE_IPS="master2"
for host in ${CONTROL_PLANE_IPS}; do
ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
scp /etc/kubernetes/pki/ca.* "${USER}"@$host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* "${USER}"@$host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* "${USER}"@$host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* "${USER}"@$host:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done
(2) Join the master to the cluster
kubeadm join cluster.kube.com:16443 --token rhs7lq.hbnxtghe8176kbas --discovery-token-ca-cert-hash sha256:7dcd41bc338235780a4b200ee066e08392ea6f1bf0c25cd93c5295ff7c05512f --experimental-control-plane
7. Join worker nodes to the cluster (join through the load-balanced endpoint rather than a single master's address, so workers keep API access if that master fails)
kubeadm join cluster.kube.com:16443 --token 63wyoi.svz5o3snjvies9gm --discovery-token-ca-cert-hash sha256:717caa098d581b827bf129f73fc646403c77b937541539d5fdd580fe8c313b9f
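Bootstrap tokens expire after 24 hours by default, so the token printed at init time will eventually stop working. To get a fresh, complete join command for new workers:

```shell
# Run on any master; prints a ready-to-use "kubeadm join ..." line
# including a new token and the CA cert hash.
kubeadm token create --print-join-command
```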
Add a private registry
sudo vim /etc/docker/daemon.json
{ "insecure-registries":["192.168.1.60:5000"] }
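Changes to daemon.json take effect only after a daemon restart; validating the JSON first avoids taking Docker down with a syntax error:

```shell
# json.tool exits non-zero on malformed JSON, short-circuiting the restart.
python3 -m json.tool /etc/docker/daemon.json && sudo systemctl restart docker
```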
Allow forwarded traffic
sudo iptables -P FORWARD ACCEPT
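This policy does not survive a reboot, and Docker rewrites the FORWARD chain when it restarts. One common way to persist it on Ubuntu (an assumption, not from the original):

```shell
# iptables-persistent saves the current ruleset and restores it at boot.
sudo apt install -y iptables-persistent
sudo netfilter-persistent save
```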