Hands-on OpenShift on Alibaba Cloud (2): OpenShift Cluster Configuration
1. Create a user on the master node
htpasswd -b /etc/origin/master/htpasswd <username> <password>
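For example, a minimal sketch assuming a hypothetical user dev with password dev123:
# -b reads the password from the command line instead of prompting for it
htpasswd -b /etc/origin/master/htpasswd dev dev123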
2. Grant the OpenShift cluster-admin role to the user on the master node
oc login -u system:admin
oc adm policy add-cluster-role-to-user cluster-admin <username>
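A minimal sketch, assuming the hypothetical user dev created in step 1:
oc adm policy add-cluster-role-to-user cluster-admin dev
# verify: log in as dev and list cluster-scoped resources, which only a cluster-admin may do
oc login -u dev
oc get nodes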
3. Access the OpenShift Web Console
1) If the master is an Alibaba Cloud ECS instance, configure the security group of the master ECS in the Alibaba Cloud console to allow inbound access on port 8443.
2) Configure and restart iptables on the master node
vi /etc/sysconfig/iptables
Add the following line:
-A INPUT -p tcp -m state --state NEW -m tcp --dport 8443 -j ACCEPT
service iptables restart
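To confirm the port is actually open on the master, a quick check (a sketch; adjust to your environment):
# the ACCEPT rule for 8443 should appear in the INPUT chain
iptables -L INPUT -n | grep 8443
# the master API / Web Console should be listening on 8443
ss -tlnp | grep 8443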
3) Because the cluster hostname (master.honsen.com) is an internal hostname, before visiting the Web Console add an entry to your PC's C:\Windows\System32\drivers\etc\hosts that maps the master's public IP to it:
<master-public-IP> master.honsen.com
4) Open in a browser: https://master.honsen.com:8443
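If the page does not load, a quick sanity check from your PC (a sketch, assuming the hosts entry above is in place and that /healthz is exposed as in a default Origin 3.x install):
# should print "ok"; -k skips certificate verification for the self-signed cert
curl -k https://master.honsen.com:8443/healthz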
4. Check that the cluster's supporting services are all OK
1) Log in to the OpenShift Web Console as the administrator and switch to the default project.
[Note] The router and OpenShift's built-in S2I registry are deployed in the default project.
In the left menu select Overview and check whether the docker-registry deployment is in an error state.
If it is in an error state, click Edit YAML, check its nodeSelector, and redeploy.
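For reference, the nodeSelector lives under the pod template spec of the deployment config; a sketch of the relevant fragment (the value shown is only an example and must match labels that actually exist on your nodes):
spec:
  template:
    spec:
      nodeSelector:
        region: infra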
2) Log in to the OpenShift Web Console as the administrator and switch to the kube-service-catalog project.
[Note] The apiserver and controller-manager are deployed in the kube-service-catalog project.
In the left menu select Overview and check whether the controller-manager pod is in Crash Loop Back-off.
# The fix is as follows, see https://github.com/openshift/openshift-ansible/issues/7800
Log in to the master node remotely
cd /root/openshift-ansible-openshift-ansible-3.7.31-1
vi roles/openshift_service_catalog/templates/controller_manager.j2
Locate the following block:
items:
- key: tls.crt
  path: apiserver.crt
and add these two lines below it:
- key: tls.key
  path: apiserver.key
ansible-playbook ~/openshift-ansible-openshift-ansible-3.7.31-1/playbooks/byo/openshift-cluster/service-catalog.yml
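After the playbook finishes, a quick check from the master (a sketch):
# the controller-manager pod should now be Running instead of CrashLoopBackOff
oc get pods -n kube-service-catalog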
5. Adjust node labels so that router pods run on at least two nodes
1) Log in to the master node remotely
# list node labels
oc get nodes --show-labels
# relabel the node currently labelled region=infra to region=primary and infra=true:
oc label node node2.honsen.com region=primary infra=true --overwrite
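If more nodes should be able to carry a router pod, repeat the command for each of them; for example, assuming a hypothetical node3.honsen.com:
oc label node node3.honsen.com region=primary infra=true --overwrite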
2) Log in remotely to the node(s) whose labels were changed
# restart the node service
systemctl restart origin-node
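Back on the master, confirm the node returns to Ready and carries the new labels before continuing (a sketch):
oc get nodes
oc get nodes --show-labels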
3) Log in to the OpenShift Web Console as the administrator
# switch to the default project, select the docker-registry deployment, then Edit YAML
Change nodeSelector to:
nodeSelector:
  infra: 'true'
# switch to the default project, select the router deployment, then Edit YAML
Delete the following two lines:
nodeSelector:
  region: infra
# in the left menu select Monitoring, select the router entry under Deployments, then scale it up to 2
Return to Monitoring and check whether each node now runs one router pod.
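The same check and the scale-up can also be done from the master with the CLI (a sketch; the dc name router assumes the default installation):
# scale the router from the command line instead of the Web Console
oc scale dc/router --replicas=2 -n default
# router pods should be spread one per node
oc get pods -n default -o wide | grep router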