Deploying OpenStack Pike on a Single CentOS 7 Node (with Kolla-Ansible)
Environment
- The physical host is a MacBook Pro with 16 GB of RAM
- A CentOS 7 VM created with VirtualBox, with 8 GB of RAM and a 40 GB disk
- CentOS 7 uses the CentOS 7 x64 1708 release with the NetEase (163) yum mirrors
- Two NICs, enp0s3 (bridged) and enp0s8 (host-only); enp0s3 is 192.168.0.25 and enp0s8 is 192.168.56.101
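It helps to confirm up front that the interface names and addresses match what will later go into globals.yml (a quick check; the names may differ on other VMs):
$ ip addr show enp0s3   # bridged NIC, should carry 192.168.0.25
$ ip addr show enp0s8   # host-only NIC, should carry 192.168.56.101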
System service configuration
Enable and start NTP
systemctl enable ntpd.service && systemctl start ntpd.service && systemctl status ntpd.service
Stop and disable the libvirtd service
systemctl stop libvirtd.service && systemctl disable libvirtd.service && systemctl status libvirtd.service
Stop and disable the firewall
systemctl stop firewalld && systemctl disable firewalld && systemctl status firewalld
Set the hostname
vi /etc/hostname
kolla
vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.26.6 kolla kolla
Install Docker
Install offline (downloads from foreign mirrors time out on domestic networks) using an offline-installer repository:
yum install git
git clone https://github.com/yaoelvon/docker-installer.git
cd docker-installer/RPM-based/docker/
sh install.sh
Configure a domestic registry mirror
Use the Aliyun registry mirror:
mkdir -p /etc/docker
vi /etc/docker/daemon.json
{
"registry-mirrors": ["https://7g5a4z30.mirror.aliyuncs.com"]
}
Restart the Docker service
systemctl daemon-reload && systemctl enable docker && systemctl restart docker && systemctl status docker
Check that the registry mirror works
docker run --rm hello-world
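As an additional sanity check, the configured mirror should appear in the docker info output:
$ docker info | grep -A 1 'Registry Mirrors'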
Install and configure Kolla-Ansible
Install the dependencies
yum install epel-release
yum install python-pip
pip install -U pip
yum install python-devel libffi-devel gcc openssl-devel libselinux-python
Install Ansible
yum install ansible
Add the following to the Ansible configuration file /etc/ansible/ansible.cfg:
[defaults]
host_key_checking=False
pipelining=True
forks=100
Install kolla-ansible
pip install kolla-ansible
Copy globals.yml and passwords.yml into the /etc/kolla directory
cp -r /root/venv/share/kolla-ansible/etc_examples/kolla /etc/kolla/
Copy the all-in-one and multinode inventory files into the current directory
cp /root/venv/share/kolla-ansible/ansible/inventory/* .
Configure the single-node inventory file (there is only one node for now)
Change every localhost entry to kolla (a sed/verification sketch follows the listing below):
$ cd /opt/kolla/config/
$ vi all-in-one
...
[control]
#localhost ansible_connection=local
kolla
[network]
#localhost ansible_connection=local
kolla
[compute]
#localhost ansible_connection=local
kolla
[storage]
#localhost ansible_connection=local
kolla
[monitoring]
#localhost ansible_connection=local
kolla
[deployment]
#localhost ansible_connection=local
kolla
...
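The same edit can be scripted with sed (a sketch; it replaces the localhost entries outright instead of commenting them out), and once SSH access to kolla is set up in the later step, Ansible connectivity can be verified:
$ sed -i 's/^localhost\s*ansible_connection=local/kolla/' ./all-in-one
$ ansible -i ./all-in-one all -m ping   # every group member should answer with pong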
Generate the random password file
kolla-genpwd
Change the admin password in it; it will be needed later for the web UI:
vi /etc/kolla/passwords.yml
...
keystone_admin_password: admin123456
...
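The same change can be scripted (a sketch; pick your own password):
$ sed -i 's/^keystone_admin_password:.*/keystone_admin_password: admin123456/' /etc/kolla/passwords.yml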
Edit the global configuration
vi /etc/kolla/globals.yml
...
kolla_base_distro: "centos"
kolla_install_type: "source"
openstack_release: "pike"
kolla_internal_vip_address: "192.168.0.27"
network_interface: "enp0s3"
neutron_external_interface: "enp0s8"
enable_cinder: "yes"
enable_cinder_backend_lvm: "yes"
...
Generate an SSH key and authorize it for this node
$ ssh-keygen
$ ssh-copy-id -i ~/.ssh/id_rsa.pub root@kolla
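A quick check that passwordless login now works (it should print the hostname without prompting for a password):
$ ssh root@kolla hostname
kolla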
Configure the Nova virtualization type
Since everything runs inside a VM, use qemu rather than kvm
$ mkdir -pv /etc/kolla/config/nova
$ vi /etc/kolla/config/nova/nova-compute.conf
[libvirt]
virt_type=qemu
cpu_mode = none
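A quick way to confirm that kvm really is unavailable inside the guest: if the following prints 0, the CPU virtualization extensions are not exposed to the VM and qemu is the right choice.
$ egrep -c '(vmx|svm)' /proc/cpuinfo
0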
Configure Kolla-Ansible's Docker options
Configure shared mounts for Docker
mkdir -pv /etc/systemd/system/docker.service.d
vi /etc/systemd/system/docker.service.d/kolla.conf
[Service]
MountFlags=shared
Restart the Docker service
$ systemctl daemon-reload && systemctl restart docker && systemctl status docker
Deploy OpenStack
Check that the configuration is correct
kolla-ansible -i ./all-in-one prechecks
Pull all the images
$ kolla-ansible pull -i ./all-in-one
Start the deployment
$ kolla-ansible -i ./all-in-one deploy
Generate the environment variable script
$ kolla-ansible post-deploy -i ./all-in-one
$ cat /etc/kolla/admin-openrc.sh
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin123456
export OS_AUTH_URL=http://192.168.0.27:35357/v3
export OS_INTERFACE=internal
export OS_IDENTITY_API_VERSION=3
export OS_REGION_NAME=RegionOne
export OS_AUTH_PLUGIN=password
To tear down the deployed OpenStack, run:
kolla-ansible destroy -i all-in-one --yes-i-really-really-mean-it
Verify OpenStack
Install the OpenStack command-line tools
$ pip install python-openstackclient
$ pip install python-neutronclient
$ which openstack
/usr/bin/openstack
Set the environment variables
$ . /etc/kolla/admin-openrc.sh
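A couple of sanity checks confirm that the CLI can reach the cluster before creating anything (output depends on the enabled services):
$ openstack token issue          # should return a token
$ openstack compute service list # nova-compute on kolla should be up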
Edit the network settings in the initialization script
$ vi /usr/share/kolla-ansible/init-runonce
...
EXT_NET_CIDR='192.168.56.0/24'
EXT_NET_RANGE='start=192.168.56.150,end=192.168.56.199'
EXT_NET_GATEWAY='192.168.56.1'
...
Initialize the basic runtime environment (image, networks, etc.)
$ . /usr/share/kolla-ansible/init-runonce
Create a VM
Create and start a VM:
$ openstack server create --image cirros --flavor m1.tiny --key-name mykey --nic net-id=f67756b6-06cd-450e-9079-58c769a9581e demo1
Show the VM details:
$ openstack server show demo1
Get the web console URL for demo1:
$ openstack console url show demo1
Allocate a floating IP (the --floating-ip-address option can also be used to request a specific address):
$ openstack floating ip create public1
Attach this floating IP to the demo1 VM:
$ openstack server add floating ip demo1 192.168.56.157
List the VMs:
$ openstack server list
List the existing network namespaces:
$ ip netns
Verify that networking works:
$ ip netns exec qrouter-2ca6f6a6-dee3-461f-9d8b-3bbb14889b58 ping 192.168.56.157
$ ip netns exec qdhcp-f67756b6-06cd-450e-9079-58c769a9581e ping 10.0.0.11
$ ip netns exec qrouter-2ca6f6a6-dee3-461f-9d8b-3bbb14889b58 ssh cirros@192.168.56.157
Open http://192.168.0.25 to log in to the web UI; the account is admin and the password is admin123456.
Troubleshooting notes
1. pip version conflicts while installing kolla-ansible
Install kolla-ansible inside a virtualenv.
2. prechecks fails with: Hostname has to resolve to IP address of api_interface
Solution:
$ vi /etc/hosts
Add:
192.168.0.25 kolla kolla
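The fix can be verified with getent; it should now resolve to the api_interface address:
$ getent hosts kolla
192.168.0.25    kolla kolla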
3. prechecks fails with: no test named 'equalto'
This is a common class of failure: many precheck errors complaining about missing libraries or wrong versions come down to the same root cause.
The full error is:
TASK [neutron : Checking tenant network types] ****************************************************************************************
fatal: [controller]: FAILED! => {"msg": "The conditional check 'item not in type_drivers' failed. The error was: An unhandled exception occurred while templating '{{ neutron_type_drivers.replace(' ', '').split(',') | reject('equalto', '') | list }}'. Error was a <class 'jinja2.exceptions.TemplateRuntimeError'>, original message: no test named 'equalto'\n\nThe error appears to have been in '/root/venv/share/kolla-ansible/ansible/roles/neutron/tasks/precheck.yml': line 41, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Checking tenant network types\n ^ here\n"}
Investigation:
Because kolla-ansible was installed in a virtualenv, I ran precheck inside that virtualenv. When the error above appeared, I searched around; the consensus was a Jinja2 version problem, fixed by upgrading to 2.8. However, the version installed locally was already 2.10.
After a lot of back and forth, I tried editing the source to strip out the equalto parts, but one change was not enough: there were many occurrences, and many could not be modified.
The next day I noticed that outside the virtualenv the system Jinja2 version was 2.7.2. Even inside the virtualenv, the code ended up using the system Jinja2 library, and upgrading the system Jinja2 solved the problem.
Digging deeper, the real cause is that libraries installed on the Ansible deployment host are not used by the target host. If the Kolla host and the target host were two different machines, libraries installed on the Kolla host would never be usable on the target. In my case they are the same machine, but kolla-ansible lives in a virtualenv: even though all required libraries were installed inside the virtualenv, when the machine acts as the target during deployment, only the libraries in the system default location are used.
Solutions:
1. On the CentOS 7 x64 1708 release, install kolla-ansible directly instead of inside a virtualenv;
2. Set the appropriate target-host variables to point at the virtualenv directory (not verified); see reference 8.
4. deploy keeps reporting that a certain port is already in use
Solution:
Set the VIP address to another unused IP in the same subnet as NIC 1; here it is 192.168.0.27.
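Before settling on a VIP, it is worth confirming that nothing already answers on that address (the ping should get no replies):
$ ping -c 2 192.168.0.27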
5. deploy fails with: Please enable at least one backend when enabling Cinder
Solution:
Enable one Cinder backend service; the list of backends can be seen in the source below. Here I set enable_cinder_backend_lvm to yes in /etc/kolla/globals.yml.
- name: Checking at least one valid backend is enabled for Cinder
run_once: True
local_action: fail msg="Please enable at least one backend when enabling Cinder"
when:
- not enable_cinder_backend_hnas_iscsi | bool
- not enable_cinder_backend_hnas_nfs | bool
- not enable_cinder_backend_iscsi | bool
- not enable_cinder_backend_lvm | bool
- not enable_cinder_backend_nfs | bool
- not cinder_backend_ceph | bool
- not cinder_backend_vmwarevc_vmdk | bool
- not enable_cinder_backend_zfssa_iscsi | bool
6. precheck fails with: Cannot process volume group cinder-volumes
Solution: create the volume group by hand
$ vgdisplay
--- Volume group ---
VG Name centos
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size <29.00 GiB
PE Size 4.00 MiB
Total PE 7423
Alloc PE / Size 7423 / <29.00 GiB
Free PE / Size 0 / 0
VG UUID yPO02e-lgFP-83Xd-ZQKB-NHJH-DQ1J-hoRAmp
$ dd if=/dev/zero of=./disk.img count=4096 bs=1MB
4096+0 records in
4096+0 records out
4096000000 bytes (4.1 GB) copied, 3.44225 s, 1.2 GB/s
$ losetup -f
/dev/loop0
$ losetup /dev/loop0 disk.img
$ pvcreate /dev/loop0
Physical volume "/dev/loop0" successfully created.
$ vgcreate cinder-volumes /dev/loop0
Volume group "cinder-volumes" successfully created
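The new volume group can be checked with vgs. Note that the losetup mapping does not survive a reboot, so the loop device has to be re-attached afterwards (a sketch, assuming disk.img stays in the same directory):
$ vgs cinder-volumes
$ losetup /dev/loop0 ./disk.img   # re-run after each reboot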
7. deploy fails at: TASK [neutron : Checking if 'MountFlags' for docker service is set to 'shared']
Solution:
Configure shared mounts for Docker
$ mkdir -pv /etc/systemd/system/docker.service.d
$ vi /etc/systemd/system/docker.service.d/kolla.conf
[Service]
MountFlags=shared
Restart the Docker service for the change to take effect:
$ systemctl daemon-reload && systemctl restart docker && systemctl status docker
8. deploy fails with: iscsid: Can not bind IPC socket
Cause:
Nova's iscsid container fails to start because the iscsid service on the compute host already occupies the IPC socket.
Solution:
yum remove iscsi-initiator-utils
Additionally:
$ sudo systemctl stop iscsid.socket iscsiuio.socket iscsid.service
$ sudo systemctl disable iscsid.socket iscsiuio.socket iscsid.service
9. Creating a VM fails with: No valid host was found. There are not enough hosts available.
The nova-scheduler logs live under /var/lib/docker/volumes/kolla_logs/_data/nova-scheduler
Check the logs:
2018-05-02 11:26:52.144 5 INFO nova.scheduler.filters.retry_filter [req-5b0d340c-c28b-4189-8dfd-7a7133ad10c1 36efcd30d08c4aeda9d5b28d9b1e8ffd a99799886ef648dc891f2df8ae4867af - default default] Host [u'kolla', u'kolla'] fails. Previously tried hosts: [[u'kolla', u'kolla']]
2018-05-02 11:26:52.145 5 INFO nova.filters [req-5b0d340c-c28b-4189-8dfd-7a7133ad10c1 36efcd30d08c4aeda9d5b28d9b1e8ffd a99799886ef648dc891f2df8ae4867af - default default] Filter RetryFilter returned 0 hosts
2018-05-02 11:26:52.145 5 INFO nova.filters [req-5b0d340c-c28b-4189-8dfd-7a7133ad10c1 36efcd30d08c4aeda9d5b28d9b1e8ffd a99799886ef648dc891f2df8ae4867af - default default] Filtering removed all hosts for the request with instance ID '49910620-0093-4eae-9f53-003709acd1c3'. Filter results: ['RetryFilter: (start: 1, end: 0)']
As the log shows, the scheduler filters left no usable host.
Cause:
The OpenStack cluster runs inside a VirtualBox VM, where KVM hardware virtualization is not available to the guest, so Nova's virtualization type must be set to qemu instead of kvm.
Solution:
$ mkdir -pv /etc/kolla/config/nova
$ vi /etc/kolla/config/nova/nova-compute.conf
[libvirt]
virt_type=qemu
cpu_mode = none
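If the cluster was already deployed when this file is added, the change still needs to be pushed to the running nova_compute container; something like the following should do it (a sketch):
$ kolla-ansible -i ./all-in-one reconfigure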
May 2, 2018
yaoel
References
1. kolla ansible deploy openstack
2. Deploying OpenStack Pike on a single CentOS 7 node with Kolla-Ansible
3. Kolla AIO deploy fail: Hostname has to resolve IP address ?
4. no test named 'equalto'
5. no-valid-host-was-found-there-are-not-enough-hosts-available
6. kolla-ansible multi-node OpenStack installation
7. Nova-iscsid Container Fails to Start
8. virtual environment