OpenShift 3.11 Cluster Installation
Environment
CentOS Linux release 7.6.1810 (Core)
Cluster Information
| Host | IP | Memory | Disk | Role |
|---|---|---|---|---|
| host-192-168-1-12 | 192.168.1.12 | 31G | 40G+500G | MASTER |
| host-192-168-1-13 | 192.168.1.13 | 31G | 40G+500G | INFRA |
| host-192-168-1-14 | 192.168.1.14 | 31G | 40G+500G | COMPUTE |
Preparing the Offline Packages
Preparing the Offline Docker Images
yum install docker -y
systemctl start docker; systemctl enable docker
docker pull docker.io/openshift/origin-node:v3.11
docker pull docker.io/openshift/origin-control-plane:v3.11
docker pull docker.io/openshift/origin-deployer:v3.11.0
docker pull docker.io/openshift/origin-haproxy-router:v3.11
docker pull docker.io/openshift/origin-pod:v3.11.0
docker pull docker.io/openshift/origin-web-console:v3.11
docker pull docker.io/openshift/origin-docker-registry:v3.11
docker pull docker.io/openshift/origin-metrics-server:v3.11
docker pull docker.io/openshift/origin-console:v3.11
docker pull docker.io/openshift/origin-metrics-heapster:v3.11
docker pull docker.io/openshift/origin-metrics-hawkular-metrics:v3.11
docker pull docker.io/openshift/origin-metrics-schema-installer:v3.11
docker pull docker.io/openshift/origin-metrics-cassandra:v3.11
docker pull docker.io/cockpit/kubernetes:latest
docker pull quay.io/coreos/cluster-monitoring-operator:v0.1.1
docker pull quay.io/coreos/prometheus-config-reloader:v0.23.2
docker pull quay.io/coreos/prometheus-operator:v0.23.2
docker pull docker.io/openshift/prometheus-alertmanager:v0.15.2
docker pull docker.io/openshift/prometheus-node-exporter:v0.16.0
docker pull docker.io/openshift/prometheus:v2.3.2
docker pull docker.io/grafana/grafana:5.2.1
docker pull quay.io/coreos/kube-rbac-proxy:v0.3.1
docker pull quay.io/coreos/etcd:v3.2.22
docker pull quay.io/coreos/kube-state-metrics:v1.3.1
docker pull docker.io/openshift/oauth-proxy:v1.1.0
docker pull quay.io/coreos/configmap-reload:v0.0.1
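To actually move these images to the offline hosts, each one can be saved to a tarball with `docker save` and later loaded with `docker load`. A minimal sketch (image list abbreviated); the commands are echoed so they can be reviewed before running on the connected build host:

```shell
# Export pulled images to tarballs for offline transfer (image list abbreviated).
# Drop 'echo' to actually export on the build host.
images="docker.io/openshift/origin-node:v3.11 docker.io/openshift/origin-pod:v3.11.0"
for img in $images; do
  f="$(echo "$img" | tr '/:' '__').tar"   # filesystem-safe filename
  echo docker save -o "$f" "$img"
done
```

On the offline hosts, load each tarball back with `docker load -i <file>.tar`.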
Preparing the Offline RPM Packages
origin-3.11.0-1.el7.git.0.62803d0.x86_64.rpm
origin-hyperkube-3.11.0-1.el7.git.0.62803d0.x86_64.rpm
origin-clients-3.11.0-1.el7.git.0.62803d0.x86_64.rpm
origin-node-3.11.0-1.el7.git.0.62803d0.x86_64.rpm
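One way to collect these RPMs (together with their dependencies) is `yumdownloader` from yum-utils on an internet-connected CentOS 7 host, assuming the centos-release-openshift-origin311 repository is enabled there. The command is built and echoed here as a sketch:

```shell
# Build the download command for the origin RPMs plus their dependencies.
# On the connected host, run 'yum install -y yum-utils' first, then execute $cmd.
rpms="origin-3.11.0 origin-node-3.11.0 origin-hyperkube-3.11.0 origin-clients-3.11.0"
cmd="yumdownloader --resolve --destdir=/tmp/origin-rpms $rpms"
echo "$cmd"
```

The downloaded directory can then be copied to the offline nodes and installed with `yum localinstall`.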
Run the following commands:
yum install -y origin-node-3.11.0 origin-hyperkube-3.11.0 origin-clients-3.11.0 conntrack-tools
# on the master node only
yum install -y origin-3.11.0
Installing Base Dependencies
yum install wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct vim python-setuptools unzip tree docker -y
yum install atomic -y
yum install -y centos-release-openshift-origin311 ceph-common container-selinux docker epel extras python-docker
Network and Security Configuration
Disable the firewall
sudo systemctl stop firewalld.service; sudo systemctl disable firewalld.service
Configure iptables
cp /etc/sysconfig/iptables /etc/sysconfig/iptables.bak.$(date "+%Y%m%d%H%M%S");
sed -i '/.*--dport 22 -j ACCEPT.*/a\-A INPUT -p tcp -m state --state NEW -m tcp --dport 53 -j ACCEPT' /etc/sysconfig/iptables;
sed -i '/.*--dport 22 -j ACCEPT.*/a\-A INPUT -p udp -m state --state NEW -m udp --dport 53 -j ACCEPT' /etc/sysconfig/iptables;
sed -i '/.*--dport 22 -j ACCEPT.*/a\-A INPUT -p tcp -m state --state NEW -m tcp --dport 5000 -j ACCEPT' /etc/sysconfig/iptables;
sed -i '/.*--dport 22 -j ACCEPT.*/a\-A INPUT -p tcp -m state --state NEW -m tcp --dport 81 -j ACCEPT' /etc/sysconfig/iptables;
# on the master node, allow port 8443 for node joins
sed -i '/.*--dport 22 -j ACCEPT.*/a\-A INPUT -p tcp -m state --state NEW -m tcp --dport 8443 -j ACCEPT ' /etc/sysconfig/iptables;
systemctl restart iptables;systemctl enable iptables
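The repeated sed invocations above can be wrapped in a small helper. This is only a sketch mirroring those sed commands; the file parameter is added here so the function can be tried on a copy before touching /etc/sysconfig/iptables:

```shell
# Append an ACCEPT rule for a TCP port right after the SSH (port 22) rule,
# mirroring the sed commands above.
allow_tcp_port() {
  port="$1"; file="$2"
  sed -i "/.*--dport 22 -j ACCEPT.*/a\\-A INPUT -p tcp -m state --state NEW -m tcp --dport $port -j ACCEPT" "$file"
}

# e.g. allow_tcp_port 8443 /etc/sysconfig/iptables
```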
DNS Configuration (on one node first)
Add the following entries to /etc/hosts:
192.168.1.12 openshift1
192.168.1.13 openshift2
192.168.1.14 openshift3
Configure the Hostnames
Note: the DNS entries above must match the hostnames.
hostnamectl set-hostname openshift1   # use openshift2 / openshift3 on the respective nodes
hostnamectl --pretty
hostnamectl --static
hostnamectl --transient
cat /etc/hostname
Run the commands above on each node, with that node's own hostname.
Set the Time Zone and Configure chrony Time Synchronization
OpenShift uses ntp for time synchronization by default. To use chrony instead:
(1) Set the time zone
Check the time zone: timedatectl
Set the time zone: timedatectl set-timezone Asia/Shanghai
(2) Open UDP port 123 on the chrony server (iptables)
sudo iptables -I INPUT -p udp --dport 123 -j ACCEPT
Edit /etc/sysconfig/iptables (sudo vi /etc/sysconfig/iptables) and add: -A INPUT -p udp --dport 123 -j ACCEPT
Restart iptables.
(3) Enable network time synchronization on every host:
Check the sync status: timedatectl status
Enable synchronization: sudo timedatectl set-ntp true
Sample timedatectl status output:
      Local time: Mon 2019-08-19 11:20:52 CST
  Universal time: Mon 2019-08-19 03:20:52 UTC
        RTC time: Mon 2019-08-19 03:40:59
       Time zone: Asia/Shanghai (CST, +0800)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: n/a
(4) Configure the chrony server and clients
Pick one node in the cluster as the chrony server; the other nodes sync time from it.
chrony is normally installed already; check with rpm -qa | grep chrony.
Chrony server configuration (sudo vi /etc/chrony.conf):
server 10.1.234.164 iburst  # this host's IP or an upstream time server
allow 10.1.0.0/16
Chrony client configuration (vi /etc/chrony.conf):
server 10.1.234.164 iburst  # the upstream time server (the chrony server above)
Start chrony:
ansible -i dfhosts.cfg all -b -m shell -a "sudo systemctl start chronyd"
ansible -i dfhosts.cfg all -b -m shell -a "sudo systemctl enable chronyd"
ansible -i dfhosts.cfg all -b -m shell -a "chronyc sources -v" # check the chrony sources (the synced server shows as ^*)
210 Number of sources = 5
.-- Source mode '^' = server, '=' = peer, '#' = local clock.
/ .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| / '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
|| .- xxxx [ yyyy ] +/- zzzz
|| Reachability register (octal) -. | xxxx = adjusted offset,
|| Log2(Polling interval) --. | | yyyy = measured offset,
|| \ | | zzzz = estimated error.
|| | | \
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* openshift1 3 7 377 97 +70us[ +154us] +/- 90ms
^? undefined.hostname.local> 0 10 0 - +0ns[ +0ns] +/- 0ns
^? 139.199.214.202 0 10 0 - +0ns[ +0ns] +/- 0ns
^? ntp7.flashdance.cx 0 10 0 - +0ns[ +0ns] +/- 0ns
^? amy.chl.la 0 10 0 - +0ns[ +0ns] +/- 0ns
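To check programmatically which source a node is synced to, the `^*` line of `chronyc sources` can be picked out with awk. Shown here against the sample line above rather than live output:

```shell
# Pull the currently synced source (mode '^', state '*') out of chronyc output.
# Uses the sample line from above; on a live node, pipe 'chronyc sources' instead.
sample='^* openshift1 3 7 377 97 +70us[ +154us] +/- 90ms'
synced=$(printf '%s\n' "$sample" | awk '$1 == "^*" { print $2 }')
echo "$synced"   # prints: openshift1
```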
Passwordless SSH Between Nodes for Ansible
ssh-keygen -t rsa
The key needs to be created with **-m PEM (ssh-keygen -m PEM)** flag if you
use a version of OpenSSH >= 7.8.
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
/root/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:UqtSrQUgOe0WeA9rfqDSMn5U3J4LzQQ2Ks+hWUbDKMA root@ldap
The key's randomart image is:
+---[RSA 2048]----+
|+ ++. |
|.E+=== |
|. .+=== . |
| . =*o.* . |
| .O=o.B S |
|+oo+.o.X |
|.+. ..+ . |
| . . . . |
| . |
+----[SHA256]-----+
ssh-copy-id -i ~/.ssh/id_rsa.pub openshift1   # repeat for openshift2 and openshift3
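Since ssh-copy-id takes one host at a time, pushing the key to all three nodes is just a loop. Echoed here as a sketch; each real run prompts for that node's password:

```shell
# Push the public key to every node; drop 'echo' to actually copy the key.
for h in openshift1 openshift2 openshift3; do
  echo ssh-copy-id -i ~/.ssh/id_rsa.pub "$h"
done
```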
Installing openshift-ansible
yum install -y ansible-2.6.14-1.el7
yum install -y openshift-ansible
Configure the Ansible Inventory
[root@openshift1 ~]# cat /etc/ansible/hosts
# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd
# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root
#openshift_deployment_type=openshift-enterprise
openshift_deployment_type=origin
openshift_release="3.11"
openshift_image_tag=v3.11
openshift_pkg_version=-3.11.0
openshift_use_openshift_sdn=true
# If ansible_ssh_user is not root, ansible_become must be set to true
#ansible_become=true
#containerized=false
# default selectors for router and registry services
# openshift_router_selector='node-role.kubernetes.io/infra=true'
# openshift_registry_selector='node-role.kubernetes.io/infra=true'
# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
#openshift_master_default_subdomain=ai.com
openshift_disable_check=memory_availability,disk_availability,docker_image_availability
os_sdn_network_plugin_name='redhat/openshift-ovs-networkpolicy'
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift1
openshift_master_cluster_public_hostname=openshift1
# false
ansible_service_broker_install=false
openshift_enable_service_catalog=false
template_service_broker_install=false
openshift_logging_install_logging=false
enable_excluders=false
# registry passwd
#oreg_url=10.1.236.77:5000/openshift3/ose-${component}:${version}
#oreg_url=10.1.236.77:5000/openshift/origin-${component}:${version}
#openshift_examples_modify_imagestreams=true
# docker config
#openshift_docker_additional_registries=10.1.236.77:5000
#openshift_docker_insecure_registries=10.1.236.77:5000
#openshift_docker_blocked_registries
openshift_docker_options="--log-driver json-file --log-opt max-size=1M --log-opt max-file=3"
# openshift_cluster_monitoring_operator_install=false
# openshift_metrics_install_metrics=true
# openshift_enable_unsupported_configurations=True
#openshift_logging_es_nodeselector='node-role.kubernetes.io/infra: "true"'
#openshift_logging_kibana_nodeselector='node-role.kubernetes.io/infra: "true"'
# host group for masters
[masters]
openshift1
# host group for etcd
[etcd]
openshift1
# host group for nodes, includes region info
[nodes]
openshift1 openshift_node_group_name='node-config-master'
openshift2 openshift_node_group_name='node-config-infra'
openshift3 openshift_node_group_name='node-config-compute'
Distribute the DNS Configuration
ansible all -m copy -a "src=/etc/hosts dest=/etc/hosts"
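After the copy, it is worth confirming on each node that all three hostnames are present in /etc/hosts. A small local check (it can also be run across the cluster via ansible's shell module):

```shell
# Report whether each cluster hostname appears in /etc/hosts on this node.
for h in openshift1 openshift2 openshift3; do
  grep -qw "$h" /etc/hosts && echo "$h ok" || echo "$h MISSING"
done
```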
Start docker
ansible all -a 'systemctl start docker';ansible all -a 'systemctl enable docker'
Run the Pre-installation Checks
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
Run the Installation
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml -vvv
Installation Complete
openshift1.jpg
Create a Username and Password
htpasswd -cb /etc/origin/master/htpasswd admin abc123
oc adm policy add-cluster-role-to-user cluster-admin admin
Open the OpenShift web console at https://openshift1:8443
openshift2.jpg
openshift3.jpg
The copyright of this post belongs to the author. Please contact the author for permission and credit the source before reposting in any form.