
OpenStack High-Availability Service Deployment (Pike)

2018-11-12  Murray66

Background

Reference

https://www.cnblogs.com/netonline/p/9201049.html
The high-availability part follows that blog post step by step. I benefited a lot from it and recommend studying it.

Solution

Since the senlin component provides installation packages for CentOS 7, the plan is to deploy an additional controller node running CentOS 7, forming a controller node cluster. The senlin component is deployed on the CentOS 7 controller node, and some components are deployed in high-availability form.

Deploying high-availability services this time was largely a matter of feeling my way forward; the idea came up mainly in order to integrate the senlin service, so I record it here for future reference.

Deployment Process

Step 1 Configure host networking

Configure the host network on all OpenStack cluster nodes.

vim /etc/hosts

The original controller node's hostname is controller; the newly deployed controller node's hostname is controller1.

To load-balance the controller node cluster, the load balancer is installed on the controller nodes themselves rather than on a dedicated server, so a virtual IP is needed. The hostname associated with the virtual IP is controllerv.

#controller
192.168.0.200 controller
192.168.0.208 controller1
192.168.0.209 controllerv

Create the virtual IP on controller1. On CentOS, a virtual IP is configured as follows:

Edit the network interface file:

vim /etc/sysconfig/network-scripts/ifcfg-eth0

Add an additional IP address:

IPADDR1=192.168.0.209

Save and exit, then restart the network service for the change to take effect. Check whether the IP address has been added with ip addr:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:c6:1a:5e brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.208/24 brd 192.168.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 192.168.0.209/24 brd 192.168.0.255 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fec6:1a5e/64 scope link
       valid_lft forever preferred_lft forever

The eth0 interface now has two IP addresses.
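As a quick, non-persistent alternative (lost on reboot), the secondary address can also be added at runtime to test the setup before editing the interface file:

ip addr add 192.168.0.209/24 dev eth0
ip addr show eth0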

Step 2 Install and configure haproxy

HAProxy is used for load balancing.

1. Install haproxy

Install haproxy on controller (Ubuntu 16.04):

apt install haproxy -y

Install haproxy on controller1 (CentOS 7):

yum install haproxy -y

2. Configure haproxy.cfg

Edit the configuration file on both controller and controller1:

vim /etc/haproxy/haproxy.cfg

global
  chroot  /var/lib/haproxy
  daemon
  group  haproxy
  user  haproxy
  maxconn  4000
  pidfile  /var/run/haproxy.pid

defaults
  log  global
  maxconn  4000
  option  redispatch
  retries  3
  timeout  http-request 10s
  timeout  queue 1m
  timeout  connect 10s
  timeout  client 1m
  timeout  server 1m
  timeout  check 10s

listen stats
  bind 0.0.0.0:1080
  mode http
  stats enable
  stats uri /
  stats realm OpenStack\ Haproxy
  stats auth admin:admin
  stats  refresh 30s
  stats  show-node
  stats  show-legends
  stats  hide-version

listen dashboard_cluster
  bind 192.168.0.209:80
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller 192.168.0.200:80 check inter 2000 rise 2 fall 5
  server controller1 192.168.0.208:80 check inter 2000 rise 2 fall 5

listen galera_cluster
  bind 192.168.0.209:3306
  balance  source
  mode    tcp
  server controller 192.168.0.200:3306 check inter 2000 rise 2 fall 5

listen glance_api_cluster
  bind 192.168.0.209:9292
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller 192.168.0.200:9292 check inter 2000 rise 2 fall 5
  
listen glance_registry_cluster
  bind 192.168.0.209:9191
  balance  source
  option  tcpka
  option  tcplog
  server controller 192.168.0.200:9191 check inter 2000 rise 2 fall 5

listen keystone_admin_cluster
  bind 192.168.0.209:35357
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller 192.168.0.200:35357 check inter 2000 rise 2 fall 5
  server controller1 192.168.0.208:35357 check inter 2000 rise 2 fall 5

listen keystone_public_cluster
  bind 192.168.0.209:5000
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller 192.168.0.200:5000 check inter 2000 rise 2 fall 5
  server controller1 192.168.0.208:5000 check inter 2000 rise 2 fall 5
  
listen senlin_api_cluster
  bind 192.168.0.209:8777
  balance source
  option tcpka
  option httpchk
  option tcplog
  server controller1 192.168.0.208:8777 check inter 2000 rise 2 fall 5

listen nova_compute_api_cluster
  bind 192.168.0.209:8774
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller 192.168.0.200:8774 check inter 2000 rise 2 fall 5
  server controller1 192.168.0.208:8774 check inter 2000 rise 2 fall 5

listen nova_placement_cluster
  bind 192.168.0.209:8778
  balance  source
  option  tcpka
  option  tcplog
  server controller 192.168.0.200:8778 check inter 2000 rise 2 fall 5
  server controller1 192.168.0.208:8778 check inter 2000 rise 2 fall 5

listen nova_metadata_api_cluster
  bind 192.168.0.209:8775
  balance  source
  option  tcpka
  option  tcplog
  server controller 192.168.0.200:8775 check inter 2000 rise 2 fall 5
  server controller1 192.168.0.208:8775 check inter 2000 rise 2 fall 5

listen nova_vncproxy_cluster
  bind 192.168.0.209:6080
  balance  source
  option  tcpka
  option  tcplog
  server controller 192.168.0.200:6080 check inter 2000 rise 2 fall 5
  server controller1 192.168.0.208:6080 check inter 2000 rise 2 fall 5

listen neutron_api_cluster
  bind 192.168.0.209:9696
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller 192.168.0.200:9696 check inter 2000 rise 2 fall 5
  server controller1 192.168.0.208:9696 check inter 2000 rise 2 fall 5

As the configuration file shows, the horizon, keystone, nova, and neutron components are deployed as high-availability services, while the MariaDB database, the RabbitMQ message queue, and the Memcached cache are not, nor is the glance component; the senlin component has only one backend server, controller1 (since senlin is deployed only on controller1).
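Before going further, the configuration can be syntax-checked on either node; the -c option makes haproxy parse the file and report errors without actually starting:

haproxy -c -f /etc/haproxy/haproxy.cfg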

3. Configure kernel parameters

Configure the kernel parameters on controller1 (CentOS 7):

vim /etc/sysctl.conf

Add the following:

net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
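Apply the new kernel parameters without rebooting:

sysctl -p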

4. Start the service

On controller (Ubuntu 16.04):

/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg

On controller1 (CentOS 7):

systemctl enable haproxy
systemctl start haproxy

If Ubuntu complains that a service cannot bind to a certain port, the virtual IP has probably not been picked up; you can also wait until the actual services are deployed and started, then restart haproxy and check again.
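A likely cause is that the virtual IP 192.168.0.209 exists only on controller1's eth0, so haproxy on the Ubuntu controller cannot bind it unless non-local binds are allowed there as well. A sketch of applying the same kernel parameter on controller:

echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf
sysctl -p
systemctl restart haproxy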

Visit http://192.168.0.209:1080/

[haproxy.png: HAProxy stats page]

This screenshot shows the state after the component services have been installed and started. Before a component is installed and started, its controller and controller1 rows are red; they turn green only once the component's services are running normally.

Step 3 Install basic services

These services are already installed on the Ubuntu controller, and since MariaDB, RabbitMQ, and Memcached clusters are not being deployed, only the NTP time-synchronization service and the OpenStack packages need to be installed on the CentOS 7 controller1.

1. NTP service

yum install chrony -y

vim /etc/chrony.conf

#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server cn.ntp.org.cn iburst
allow 192.168.0.0/24

Start the service:

systemctl enable chronyd.service
systemctl restart chronyd.service

Check the status:

chronyc sources -v

2. OpenStack packages

yum install centos-release-openstack-pike -y
yum upgrade -y
yum install python-openstackclient -y
yum install openstack-selinux -y

3. Disable firewalld and SELinux

Disable firewalld:

systemctl stop firewalld.service #stop firewalld
systemctl disable firewalld.service #disable firewalld at boot
firewall-cmd --state #check the firewall status (shows "not running" when stopped, "running" when active)

Disable SELinux:

vim /etc/selinux/config

SELINUX=disabled

Reboot for the configuration to take effect:

reboot
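If an immediate reboot is inconvenient, SELinux can also be switched to permissive for the current session; the config-file change above still takes effect on the next boot:

setenforce 0
getenforce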

Step 4 Deploy the Keystone cluster

Keystone is already installed on the Ubuntu controller and its database has already been created, so the database-creation step does not need to be repeated for the CentOS controller1.

1. Install packages

yum install openstack-keystone httpd mod_wsgi mod_ssl -y

2. Configure keystone.conf

vim /etc/keystone/keystone.conf

[database]
# ...
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controllerv/keystone

[token]
# ...
provider = fernet

Note the database section: the haproxy configuration load-balances the database service (in practice with only one backend server, 192.168.0.200, i.e. controller), but it is accessed through the virtual IP, so the connection string uses controllerv rather than controller.

Likewise, the database section of keystone.conf on the original controller node has to be changed:

[database]
# connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controllerv/keystone

A word on why the database service is not actually load-balanced: I failed to set up a MariaDB Galera cluster across the Ubuntu and CentOS systems, and since I did not want to hold up the senlin installation, I have not redeployed the database cluster for now; the virtual-IP entry is nevertheless kept in the haproxy configuration.

If you do not plan to deploy a database cluster later, you can leave database load balancing out of the haproxy configuration and have all controller, compute, and storage nodes connect directly to the designated database.
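To confirm that the database is reachable through the virtual-IP entry, a quick check (assuming the mariadb command-line client is available and KEYSTONE_DBPASS is the actual keystone database password):

mysql -h controllerv -u keystone -pKEYSTONE_DBPASS -e "SHOW DATABASES;"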

3. Sync the Keystone database

Run on either controller or controller1:

su -s /bin/sh -c "keystone-manage db_sync" keystone

4. Initialize the Fernet keys

The Fernet keys were already initialized on the controller node, so there is no need to run the following:

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

Instead, sync the keys from controller to controller1:

scp -r /etc/keystone/fernet-keys/ /etc/keystone/credential-keys/ root@192.168.0.208:/etc/keystone/

Change the ownership of the keys on controller1:

chown keystone:keystone /etc/keystone/credential-keys/ -R
chown keystone:keystone /etc/keystone/fernet-keys/ -R
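The key directories on the two nodes should now match; a simple way to compare them from controller:

md5sum /etc/keystone/fernet-keys/*
ssh root@192.168.0.208 md5sum /etc/keystone/fernet-keys/*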

5. Configure httpd.conf

vim /etc/httpd/conf/httpd.conf

ServerName controller1
Listen 192.168.0.208:80

6. Configure wsgi-keystone.conf

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

Modify the listening IP:

sed -i "s/Listen\ 5000/Listen\ 192.168.0.208:5000/g" /etc/httpd/conf.d/wsgi-keystone.conf
sed -i "s/Listen\ 35357/Listen\ 192.168.0.208:35357/g" /etc/httpd/conf.d/wsgi-keystone.conf
sed -i "s/*:5000/192.168.0.208:5000/g" /etc/httpd/conf.d/wsgi-keystone.conf
sed -i "s/*:35357/192.168.0.208:35357/g" /etc/httpd/conf.d/wsgi-keystone.conf

7. Bootstrap the Identity service

The original bootstrap was performed with:

keystone-manage bootstrap --bootstrap-password admin_pass \
  --bootstrap-admin-url http://controller:35357/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne 

The keystone service endpoints created at that time were:

. admin-openrc
openstack endpoint list | grep keystone
| 3c46406f65924f7aab294c0324026851 | RegionOne | keystone     | identity       | True    | public    | http://controller:5000/v3/              |
| 4aef4c4504ac4ca6b30e847baa0439a6 | RegionOne | keystone     | identity       | True    | internal  | http://controller:5000/v3/              |
| 6945c1314da546efb4dddc819ef0cc19 | RegionOne | keystone     | identity       | True    | admin     | http://controller:35357/v3/             |

With load balancing enabled, the entry point changes from http://controller:*** to http://controllerv:***, so the endpoints must be rebuilt: first delete the old endpoints, then bootstrap again:

openstack endpoint delete 3c46406f65924f7aab294c0324026851
openstack endpoint delete 4aef4c4504ac4ca6b30e847baa0439a6
openstack endpoint delete 6945c1314da546efb4dddc819ef0cc19
keystone-manage bootstrap --bootstrap-password admin_pass \
  --bootstrap-admin-url http://controllerv:35357/v3/ \
  --bootstrap-internal-url http://controllerv:5000/v3/ \
  --bootstrap-public-url http://controllerv:5000/v3/ \
  --bootstrap-region-id RegionOne 
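After re-bootstrapping, confirm that the identity endpoints now point to controllerv:

. admin-openrc
openstack endpoint list | grep keystone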

8. Start the services

On controller1:

systemctl enable httpd.service
systemctl restart httpd.service

On controller:

service apache2 restart

9. Verify the service

Run the verification on both controller and controller1:

. admin-openrc
openstack --os-auth-url http://controllerv:35357/v3 \
> --os-project-domain-name Default --os-user-domain-name Default \
> --os-project-name admin --os-username admin token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2018-11-12T04:44:19+0000                                                                                                                                                                |
| id         | gAAAAABb6PcTIBzjFGvpQrIXWeEirC__6vs-KxCPFnpHYgD5gSRKvjCbsvThRRmCq2HBd5IKUZAXTVdiFu-JwhzSK47lYVci5KWb-Q6eYprlorndSRTjSn-GY3cgAB3EaUeYvhXuy6Qh0qg4QI90B6VgK-Tns_omkPMRnQaTCpzCN2O-5CuX5B8 |
| project_id | f30a92c42d264a79ba5744b084346a2d                                                                                                                                                        |
| user_id    | 8668aa3368c0427c98b1d1824a36f720                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Step 5 Deploy the Nova controller cluster

Glance is skipped here because the Glance image service is not deployed as a high-availability cluster; the glance service already running on controller is reused. Why glance is not clustered is explained later.

1. Rebuild the endpoints

As with keystone above, the nova component is now load-balanced and its service entry point changes from http://controller:*** to http://controllerv:***, so the endpoints need to be rebuilt: delete the old endpoints, then recreate them.

List the endpoints:

. admin-openrc
openstack endpoint list | grep nova
| 98b11f1d9e224bb7bd5b3f66502dbc4b | RegionOne | nova         | compute        | True    | public    | http://controller:8774/v2.1             |
| b0852fcaac60433a9091f1e9813007f8 | RegionOne | nova         | compute        | True    | admin     | http://controller:8774/v2.1             |
| da4362132d3d4160976959c1325255b2 | RegionOne | nova         | compute        | True    | internal  | http://controller:8774/v2.1             |

openstack endpoint list | grep placement
| 37679f9ddcc04a6cabc6fc78388d49d1 | RegionOne | placement    | placement      | True    | admin     | http://controller:8778                  |
| 72bfa227e77e40cca4f8bcd15547eedb | RegionOne | placement    | placement      | True    | internal  | http://controller:8778                  |
| fc77bed693a54a6c81f7757c3857db3a | RegionOne | placement    | placement      | True    | public    | http://controller:8778                  |

Delete the endpoints:

openstack endpoint delete 98b11f1d9e224bb7bd5b3f66502dbc4b
……
//delete the remaining nova and placement endpoints one by one

Create the endpoints:

openstack endpoint create --region RegionOne compute public http://controllerv:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controllerv:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controllerv:8774/v2.1

openstack endpoint create --region RegionOne placement public http://controllerv:8778
openstack endpoint create --region RegionOne placement internal http://controllerv:8778
openstack endpoint create --region RegionOne placement admin http://controllerv:8778

2. Install packages

Install the packages on controller1:

yum install openstack-nova-api openstack-nova-conductor \
   openstack-nova-console openstack-nova-novncproxy \
   openstack-nova-scheduler openstack-nova-placement-api -y

3. Modify the configuration

Modify the configuration on both the controller and controller1 nodes:

Note that my_ip differs between the two nodes.
Note which service addresses use controller and which use controllerv, and adjust them according to the haproxy configuration.

This is the final nova configuration, including the neutron and cinder sections; it is shown in full only to illustrate how the service addresses change once load balancing is in place. When first bringing up the nova services, the neutron and cinder sections can be left unconfigured.

vim /etc/nova/nova.conf

[DEFAULT]
state_path = /var/lib/nova
block_device_allocate_retries=600
block_device_allocate_retries_interval=10
my_ip=192.168.0.208
use_neutron=true
firewall_driver=nova.virt.libvirt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
osapi_compute_listen=$my_ip
osapi_compute_listen_port=8774
metadata_listen=$my_ip
metadata_listen_port=8775
transport_url=rabbit://openstack:rabbitmq_pass@controller
[api]
auth_strategy=keystone
[api_database]
connection=mysql+pymysql://nova:nova_dbpass@controllerv/nova_api
[barbican]
[cache]
[cells]
enable=False
[cinder]
os_region_name= RegionOne
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[crypto]
[database]
connection=mysql+pymysql://nova:nova_dbpass@controllerv/nova
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers=http://controllerv:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_uri = http://controllerv:5000
auth_url = http://controllerv:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova_pass
[libvirt]
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://controllerv:9696
auth_url = http://controllerv:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron_pass
service_metadata_proxy = true
metadata_proxy_shared_secret = metadata_pass
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controllerv:35357/v3
username = placement
password = placement_pass
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
server_listen=$my_ip
server_proxyclient_address=$my_ip
[trusted_computing]
[upgrade_levels]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled=true
vncserver_listen=$my_ip
vncserver_proxyclient_address=$my_ip
novncproxy_base_url=http://$my_ip:6080/vnc_auto.html
novncproxy_host=$my_ip
novncproxy_port=6080
[workarounds]
[wsgi]
[xenserver]
block_device_creation_timeout=60
[xvp]

Modify the 00-nova-placement-api.conf file on controller1:

sed -i "s/Listen\ 8778/Listen\ 192.168.0.208:8778/g" /etc/httpd/conf.d/00-nova-placement-api.conf
sed -i "s/*:8778/192.168.0.208:8778/g" /etc/httpd/conf.d/00-nova-placement-api.conf

vim /etc/httpd/conf.d/00-nova-placement-api.conf

Add:

<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
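The placement API runs under httpd, so restart httpd on controller1 for this change to take effect:

systemctl restart httpd.service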

4. Sync the nova databases

Run on either controller or controller1:

su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
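To confirm that cell0 and cell1 were registered correctly, run:

su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova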

5. Start the services

On controller:

service nova-api restart
service nova-consoleauth restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart

On controller1:

systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service

systemctl restart openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
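After the services come up on both nodes, the nova control services of both controllers should be listed as up:

. admin-openrc
openstack compute service list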

Step 6 Deploy the Neutron controller cluster

1. Rebuild the endpoints

As with nova above, the neutron component is load-balanced and its service entry point changes from http://controller:*** to http://controllerv:***, so the endpoints need to be rebuilt: delete the old endpoints, then recreate them.

List the endpoints:

. admin-openrc
openstack endpoint list | grep neutron
| 11e2438f264d4318a56ac3faea1d476d | RegionOne | neutron      | network        | True    | public    | http://controller:9696                  |
| c45bb9cfefc846b4b2c8d30f004b0fda | RegionOne | neutron      | network        | True    | admin     | http://controller:9696                  |
| e2f7f35905dc43f29d2cbf95851073e9 | RegionOne | neutron      | network        | True    | internal  | http://controller:9696                  |

Delete the endpoints:

openstack endpoint delete 11e2438f264d4318a56ac3faea1d476d
……
//delete the remaining neutron endpoints one by one

Recreate the endpoints:

openstack endpoint create --region RegionOne network public http://controllerv:9696
openstack endpoint create --region RegionOne network internal http://controllerv:9696
openstack endpoint create --region RegionOne network admin http://controllerv:9696

2. Install packages

Install the packages on controller1:

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient ebtables ipset -y

3. Configure neutron.conf

The self-service network architecture is used.

Modify the configuration file on both controller and controller1:

vim /etc/neutron/neutron.conf

Note the bind_host setting; set a different IP on each node.

[DEFAULT]
core_plugin = ml2
bind_host = 192.168.0.208
auth_strategy = keystone
service_plugins = router
allow_overlapping_ips = true
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
transport_url = rabbit://openstack:rabbitmq_pass@controller
[agent]
[cors]
[database]
connection = mysql+pymysql://neutron:neutron_dbpass@controllerv/neutron
[keystone_authtoken]
auth_uri = http://controllerv:5000
auth_url = http://controllerv:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron_pass
[matchmaker_redis]
[nova]
auth_url = http://controllerv:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova_pass
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
[ssl]

4. Configure ml2_conf.ini

vim /etc/neutron/plugins/ml2/ml2_conf.ini

[DEFAULT]
[l2pop]
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true

5. Configure linuxbridge_agent.ini

vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:eth0
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
[vxlan]
enable_vxlan = true
local_ip = 192.168.0.208
l2_population = true

On the CentOS controller1, the following kernel parameters also need to be configured:

vim /etc/sysctl.conf

Add the following:

net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
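On CentOS 7 these sysctl keys only exist once the br_netfilter kernel module is loaded, so load it (and make the load persistent) before applying the settings; a quick sketch:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl -p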

6. Configure l3_agent.ini

vim /etc/neutron/l3_agent.ini

[DEFAULT]
interface_driver = linuxbridge
[agent]
[ovs]

7. Configure dhcp_agent.ini

vim /etc/neutron/dhcp_agent.ini

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
[agent]
[ovs]

8. Configure metadata_agent.ini

vim /etc/neutron/metadata_agent.ini

[DEFAULT]
nova_metadata_host = controllerv
metadata_proxy_shared_secret = metadata_pass
[agent]
[cache]

9. Configure nova.conf

vim /etc/nova/nova.conf

This section already appeared in the previous step:

[neutron]
url = http://controllerv:9696
auth_url = http://controllerv:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron_pass
service_metadata_proxy = true
metadata_proxy_shared_secret = metadata_pass

10. Sync the neutron database

Run on controller1:

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Run on either controller or controller1:

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

11. Start the services

Restart the nova service:

On controller1:

systemctl restart openstack-nova-api.service

On controller:

service nova-api restart

Start the neutron services:

On controller1:

systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service neutron-l3-agent.service
systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service neutron-l3-agent.service

On controller:

service neutron-server restart
service neutron-linuxbridge-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart
service neutron-l3-agent restart
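Afterwards, the agents from both controllers should be registered and alive:

. admin-openrc
openstack network agent list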

Step 7 Other services

In this deployment, only the keystone, nova, neutron, and horizon services are load-balanced. Horizon has no configuration that needs special attention; just follow the official installation guide. Other components such as cinder are not deployed on controller1; the only additional component on controller1 is senlin, whose deployment is covered in another article: https://www.jianshu.com/p/db4e8858ef74

On the compute and storage nodes, note that the service entry points for keystone, nova, and neutron have changed: in their configuration files, replace the original controller with controllerv, the hostname of the virtual IP. The entry points of components that are not load-balanced remain unchanged.

Some Issues

1. Dashboard access

On Ubuntu, horizon is reached at http://controller/horizon, while on CentOS it is reached at http://controller1/dashboard. As a result, the load-balanced entry points http://controllerv/dashboard and http://controllerv/horizon may fail: a request to http://controllerv/dashboard may be dispatched to the Ubuntu controller, and a request to http://controllerv/horizon may be dispatched to the CentOS controller. In that case, simply access http://controller/horizon or http://controller1/dashboard directly.

2. Image service

As mentioned above, glance is not deployed as a high-availability cluster because no matching image storage backend has been set up. If the image service were load-balanced, then when an image upload is dispatched to controller, the image is stored under /var/lib/glance/images/ on controller; if a later instance-creation request is dispatched to controller1, controller1 looks for the image in its own /var/lib/glance/images/ directory, fails to find it, and raises an error, because the image lives on controller and not on controller1.

3. Deploying a MariaDB cluster

How to deploy a MariaDB cluster across Ubuntu and CentOS, what exactly went wrong during the failed deployment, and what caused the failure are still open questions.
