Installing OpenStack Mitaka (the latest OpenStack release at the time of writing)
Preface: Deploying OpenStack is straightforward, but only if you have a solid grasp of the theory behind it. I have always believed that practice should be guided by theory. The installation guides scattered around the web can all produce a basic private cloud, but have you noticed how much of their configuration is redundant? Why the repetition? Which settings actually belong where, and what does each parameter mean? Many authors cannot answer those questions themselves. In this guide I try to set the record straight: redundant configuration is a symptom of not understanding what each option actually does. If anything is unclear, you can reach me at egonlin4573@gmail.com
Overview: this walkthrough is a basic three-node deployment; a clustered variant may follow when I have time.
1. Networks (no Cinder node in this lab):
   Management network: 172.16.209.0/24
   Data network: 1.1.1.0/24
   Topology diagram: http://images2015.cnblogs.com/blog/885885/201701/885885-20170107110345284-2133445539.png
2. Operating system: CentOS Linux release 7.2.1511 (Core)
3. Kernel: 3.10.0-327.el7.x86_64
4. OpenStack release: Mitaka
Screenshots of the finished deployment:
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107180933394-55283219.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107180823394-905474301.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107181215300-1382674069.png
OpenStack Mitaka deployment conventions:
0. Every configuration change below is made by locating the relevant item in the existing configuration file and modifying or adding it there.
1. Never append a comment on the same line as an option; put comments on the line above or below instead.
2. Always add options immediately after the section header; do not edit the commented-out examples in place.
PART 1: Environment preparation

1. On every machine: configure a static IP, add name resolution entries to the hosts file, set the hostname, and disable firewalld and SELinux.

/etc/hosts:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.209.115 controller01
172.16.209.117 compute01
172.16.209.119 network02

network02 has three network cards; the other nodes have two each.
2. Configure a yum repository on every machine (optional; the stock CentOS repositories also work). Example repo file (the [mitaka] section header was implied in the original; name it as you like):
[mitaka]
name=mitaka repo
baseurl=http://172.16.209.100/mitaka-rpms/
enabled=1
gpgcheck=0
3. On every machine:
yum makecache && yum install vim net-tools -y && yum update -y
4. Time synchronization

All nodes:
yum install chrony -y

Controller node. Edit /etc/chrony.conf:
server ntp.staging.kycloud.lan iburst
allow <management-network>/24

Start the service:
systemctl enable chronyd.service
systemctl start chronyd.service

All other nodes. Edit /etc/chrony.conf:
server <controller management IP> iburst

Start the service:
systemctl enable chronyd.service
systemctl start chronyd.service
If the time zone is not Asia/Shanghai, change it:
# timedatectl set-local-rtc 1                # keep the hardware clock on local time; 0 means UTC
# timedatectl set-timezone Asia/Shanghai     # set the system time zone to Shanghai
Ignoring the differences between distributions, changing the time zone at a lower level is simpler than you might expect:
# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
Verification: run chronyc sources on every machine. A * in the S column means that source is synchronized. (Synchronization may take a few minutes, but the clocks must be in sync before you continue.)
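If you want to script that check across nodes, a small helper (my own sketch, not part of chrony) can test whether the "chronyc sources" output fed to it contains a selected source, whose line begins with "^*":

```shell
# Sketch of a check helper (not part of chrony): succeeds when the
# "chronyc sources" output on stdin contains a selected (*) source.
chrony_synced() {
    grep -q '^\^\*'
}

# Usage: chronyc sources | chrony_synced && echo "time synchronized"
```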
II: Obtaining the packages

If you use the custom repository above, the CentOS and RHEL steps below can be skipped.

# On all nodes
CentOS:
yum install yum-plugin-priorities -y            # guards against unwanted automatic updates
yum install centos-release-openstack-mitaka -y  # only if you are not using my custom repo

RHEL:
yum install yum-plugin-priorities -y
yum install https://rdoproject.org/repos/rdo-release.rpm -y
On RHEL, remove the EPEL repository.

# On all nodes
yum upgrade
yum install python-openstackclient -y
yum install openstack-selinux -y
III: Deploying the MariaDB database

Controller node:
yum install mariadb mariadb-server python2-PyMySQL -y

Edit /etc/my.cnf.d/openstack.cnf:

[mysqld]
bind-address = <controller management IP>
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Start the service:
systemctl enable mariadb.service
systemctl start mariadb.service
mysql_secure_installation
IV: Deploying MongoDB for the Telemetry service

Controller node:
yum install mongodb-server mongodb -y

Edit /etc/mongod.conf:
bind_ip = <controller management IP>
smallfiles = true

Start the service:
systemctl enable mongod.service
systemctl start mongod.service
V: Deploying the RabbitMQ message queue

Controller node:
yum install rabbitmq-server -y

Start the service:
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service

Create a RabbitMQ user and password:
rabbitmqctl add_user openstack che001

rabbitmqctl delete_user guest

Grant the new openstack user full permissions:
rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Enable the management web UI:
rabbitmq-plugins enable rabbitmq_management
(verify at http://172.16.209.104:15672/ with user openstack, password che001)
VI: Deploying memcached (caches keystone tokens)

Controller node:
yum install memcached python-memcached -y

cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="10240"
CACHESIZE="64"
#OPTIONS="-l 127.0.0.1,::1"
OPTIONS="-l 0.0.0.0"

Start the service:
systemctl enable memcached.service
systemctl start memcached.service
PART 2: Deploying the identity service (keystone)
I: Installing and configuring the service

1. Create the database and database user:
mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'che001';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'che001';
flush privileges;
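The create-database plus double-GRANT pattern above repeats verbatim for glance, nova, and neutron below, with only the name and password changing. A throwaway helper (my own sketch; the name make_db_sql is made up) can print that SQL for any service:

```shell
# Sketch: print the per-service database DDL used throughout this guide.
# Assumes this guide's convention: database name = user name = service name.
make_db_sql() {
    local db="$1" pass="$2"
    cat <<EOF
CREATE DATABASE ${db};
GRANT ALL PRIVILEGES ON ${db}.* TO '${db}'@'localhost' IDENTIFIED BY '${pass}';
GRANT ALL PRIVILEGES ON ${db}.* TO '${db}'@'%' IDENTIFIED BY '${pass}';
FLUSH PRIVILEGES;
EOF
}

# Usage: make_db_sql glance che001 | mysql -u root -p
```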
2. yum install openstack-keystone httpd mod_wsgi -y

3. Edit /etc/keystone/keystone.conf:

[DEFAULT]
admin_token = che001
# Better: generate a random token with: openssl rand -hex 10
# admin_token exists to bootstrap keystone. Until keystone itself is running,
# normal authentication cannot work, so we authenticate with this shared token.
# Once keystone is deployed, the admin_token mechanism is removed.
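To see why openssl rand -hex 10 is a good token source: it prints 10 random bytes as 20 lowercase hex characters, which you can paste straight into admin_token:

```shell
# Generate a random bootstrap token: 10 random bytes, hex-encoded.
TOKEN=$(openssl rand -hex 10)
echo "$TOKEN"        # e.g. 3d7f0a91c2b44e5a6f10 (20 hex characters)
echo "${#TOKEN}"     # 20
```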
[database]
connection = mysql+pymysql://keystone:che001@controller01/keystone

[token]
provider = fernet
# Token providers: UUID, PKI, PKIZ, or Fernet.
# See http://blog.csdn.net/miss_yang_cloud/article/details/49633719
4. Populate the database:
su -s /bin/sh -c "keystone-manage db_sync" keystone

5. Initialize the Fernet keys:
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
6. Configure the Apache service. Edit /etc/httpd/conf/httpd.conf:
ServerName controller01

Edit /etc/httpd/conf.d/wsgi-keystone.conf and add:
Listen 5000
Listen 35357
<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>
7. Start the service:
systemctl enable httpd.service
systemctl start httpd.service
II: Creating the service entity and API endpoints

1. Configure temporary administrator environment variables; these grant the rights needed to create everything below:
export OS_TOKEN=che001   # must match admin_token in /etc/keystone/keystone.conf
export OS_URL=http://controller01:35357/v3
export OS_IDENTITY_API_VERSION=3
2. With those credentials, create the identity service entity (the service catalog):
openstack service create --name keystone --description "OpenStack Identity" identity
# If this fails with a 500 error, ArgsAlreadyParsedError: arguments already parsed: cannot register CLI option,
# drop the --description "OpenStack Identity" argument.

3. Create the three API endpoints for that service entity:
openstack endpoint create --region RegionOne \
  identity public http://controller01:5000/v3

openstack endpoint create --region RegionOne \
  identity internal http://controller01:5000/v3

openstack endpoint create --region RegionOne \
  identity admin http://controller01:35357/v3
III: Creating a domain, project (tenant), user, and role, and linking the four together
(diagram: http://images2015.cnblogs.com/blog/885885/201701/885885-20170107123306159-1430805058.png)

Create a common domain:
openstack domain create --description "Default Domain" default
Administrator: admin
openstack project create --domain default \
  --description "Admin Project" admin

openstack user create --domain default \
  --password-prompt admin

openstack role create admin

openstack role add --project admin --user admin admin
Regular user: demo
openstack project create --domain default \
  --description "Demo Project" demo

openstack user create --domain default \
  --password-prompt demo

openstack role create user

openstack role add --project demo --user demo user
Create the shared service project used by all later services.
Explanation: every service installed later needs four keystone operations: 1. create a project, 2. create a user, 3. create a role, 4. link them together. All later services share the single project service and the existing admin role, so in practice only steps 2 and 4 remain per service.
openstack project create --domain default \
  --description "Service Project" service
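Steps 2 and 4, plus the service entity and its three endpoints, form a fixed template that repeats for glance, nova, and neutron below. This dry-run sketch (the function name and argument order are my own invention) only prints the commands so you can review the pattern; nothing is executed:

```shell
# Sketch: print the keystone registration commands this guide repeats for
# each new service: user, role link, service entity, and three endpoints.
register_service_cmds() {
    local name="$1" type="$2" url="$3"
    echo "openstack user create --domain default --password-prompt ${name}"
    echo "openstack role add --project service --user ${name} admin"
    echo "openstack service create --name ${name} ${type}"
    for iface in public internal admin; do
        echo "openstack endpoint create --region RegionOne ${type} ${iface} ${url}"
    done
}

# Usage: register_service_cmds glance image http://controller01:9292
```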
IV: Verification. Edit /etc/keystone/keystone-paste.ini and remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections. Keystone is now deployed, so tokens can be issued with a username and password and the hand-configured admin_token mechanism is no longer needed.
unset OS_TOKEN OS_URL
openstack --os-auth-url http://controller01:35357/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name admin --os-username admin token issue
Password: (enter the password you set for admin with openstack user create --domain default --password-prompt admin)
(expected output: http://images2015.cnblogs.com/blog/885885/201609/885885-20160925113641165-123357983.png)
V: Creating client environment scripts

Administrator script admin-openrc:
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=che001
export OS_AUTH_URL=http://controller01:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Regular user script demo-openrc:
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=che001
export OS_AUTH_URL=http://controller01:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Result:
source admin-openrc
# openstack token issue
(expected output: http://images2015.cnblogs.com/blog/885885/201609/885885-20160925114306754-514160978.png)
PART 3: Deploying the image service

I: Installing and configuring the service

1. Create the database and database user:
mysql -u root -p
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'che001';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'che001';
flush privileges;
2. Keystone operations. As noted above, all later services go into the shared service project; for each service we create a user, grant it the admin role, and link the two:
. admin-openrc
openstack user create --domain default --password-prompt glance

openstack role add --project service --user glance admin
Create the service entity:
openstack service create --name glance \
  --description "OpenStack Image" image

Create the endpoints:
openstack endpoint create --region RegionOne \
  image public http://controller01:9292

openstack endpoint create --region RegionOne \
  image internal http://controller01:9292

openstack endpoint create --region RegionOne \
  image admin http://controller01:9292
3. Install the packages:
yum install openstack-glance -y

4. Edit /etc/glance/glance-api.conf:
[database]
# This connection is used to create the database schema during db_sync.
# Omitting it from glance-api does not affect VM creation, but it breaks the
# metadata-definitions API (log error: ERROR glance.api.v2.metadef_namespaces).
connection = mysql+pymysql://glance:che001@controller01/glance

[keystone_authtoken]
auth_url = http://controller01:5000
memcached_servers = controller01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = che001

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
Edit /etc/glance/glance-registry.conf:

[database]
# glance-registry uses this connection to look up image metadata.
connection = mysql+pymysql://glance:che001@controller01/glance
Create the image directory:
mkdir /var/lib/glance/images/
chown glance. /var/lib/glance/images/

Populate the database (warnings about "future" can be ignored):
su -s /bin/sh -c "glance-manage db_sync" glance

Start the services:
systemctl enable openstack-glance-api.service \
  openstack-glance-registry.service
systemctl start openstack-glance-api.service \
  openstack-glance-registry.service
II: Verification:
. admin-openrc
wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
(local mirror: wget http://172.16.209.100/cirros-0.3.4-x86_64-disk.img)

openstack image create "cirros" \
  --file cirros-0.3.4-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public

openstack image list
PART 4: Deploying the compute service
I: Controller node configuration

1. Create the databases and database user:
CREATE DATABASE nova_api;
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'che001';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'che001';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'che001';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'che001';
flush privileges;
2. Keystone operations:
. admin-openrc
openstack user create --domain default \
  --password-prompt nova
openstack role add --project service --user nova admin
openstack service create --name nova \
  --description "OpenStack Compute" compute

openstack endpoint create --region RegionOne \
  compute public http://controller01:8774/v2.1/%\(tenant_id\)s

openstack endpoint create --region RegionOne \
  compute internal http://controller01:8774/v2.1/%\(tenant_id\)s

openstack endpoint create --region RegionOne \
  compute admin http://controller01:8774/v2.1/%\(tenant_id\)s
3. Install the packages:
yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler -y

4. Edit /etc/nova/nova.conf:
[DEFAULT]
enabled_apis = osapi_compute,metadata
rpc_backend = rabbit
auth_strategy = keystone
# management-network IP of the controller
my_ip = 172.16.209.115
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
connection = mysql+pymysql://nova:che001@controller01/nova_api

[database]
connection = mysql+pymysql://nova:che001@controller01/nova

[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = che001

[keystone_authtoken]
auth_url = http://controller01:5000
memcached_servers = controller01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = che001

[vnc]
# management-network IP
vncserver_listen = 172.16.209.115
vncserver_proxyclient_address = 172.16.209.115

[oslo_concurrency]
lock_path = /var/lib/nova/tmp
5. Populate the databases (warnings about "future" can be ignored):
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova

6. Start the services:
systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
II: Compute node configuration

1. Install the packages:
yum install openstack-nova-compute libvirt-daemon-lxc -y

2. Edit /etc/nova/nova.conf:
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
# management-network IP of the compute node
my_ip = 172.16.209.117
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = che001

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
# management-network IP of the compute node
vncserver_proxyclient_address = 172.16.209.117
# management-network IP of the controller
novncproxy_base_url = http://172.16.209.115:6080/vnc_auto.html

[glance]
api_servers = http://controller01:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp
3. If you are deploying nova on a machine without hardware virtualization support, check:
egrep -c '(vmx|svm)' /proc/cpuinfo
If the result is 0, edit /etc/nova/nova.conf:
[libvirt]
virt_type = qemu
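The check above can be folded into a tiny helper (my own sketch) that reads cpuinfo-style text on stdin and prints the virt_type to use:

```shell
# Sketch: choose nova's virt_type from CPU flags. Hardware virtualization
# shows up as the vmx (Intel) or svm (AMD) flag in /proc/cpuinfo.
detect_virt_type() {
    if grep -Eq '(vmx|svm)'; then
        echo kvm
    else
        echo qemu
    fi
}

# Usage: detect_virt_type < /proc/cpuinfo
```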
4. Start the services:
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
III: Verification on the controller node:
# source admin-openrc
# openstack compute service list
+----+------------------+--------------+----------+---------+-------+----------------------------+
| Id | Binary           | Host         | Zone     | Status  | State | Updated At                 |
+----+------------------+--------------+----------+---------+-------+----------------------------+
|  1 | nova-consoleauth | controller01 | internal | enabled | up    | 2016-08-17T08:51:37.000000 |
|  2 | nova-conductor   | controller01 | internal | enabled | up    | 2016-08-17T08:51:29.000000 |
|  8 | nova-scheduler   | controller01 | internal | enabled | up    | 2016-08-17T08:51:38.000000 |
| 12 | nova-compute     | compute01    | nova     | enabled | up    | 2016-08-17T08:51:30.000000 |
+----+------------------+--------------+----------+---------+-------+----------------------------+
PART 5: Deploying the networking service

I: Controller node configuration

1. Create the database and database user:
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'che001';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'che001';
flush privileges;
2. Keystone operations:
. admin-openrc

openstack user create --domain default --password-prompt neutron

openstack role add --project service --user neutron admin

openstack service create --name neutron \
  --description "OpenStack Networking" network

openstack endpoint create --region RegionOne \
  network public http://controller01:9696

openstack endpoint create --region RegionOne \
  network internal http://controller01:9696

openstack endpoint create --region RegionOne \
  network admin http://controller01:9696
3. Install the packages:
yum install openstack-neutron openstack-neutron-ml2 python-neutronclient which -y
4. Configure the server component. Edit /etc/neutron/neutron.conf:

[DEFAULT]
core_plugin = ml2
service_plugins = router
# enable overlapping IP addresses between tenants
allow_overlapping_ips = True
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True

[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = che001

[database]
connection = mysql+pymysql://neutron:che001@controller01/neutron

[keystone_authtoken]
auth_url = http://controller01:5000
memcached_servers = controller01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = che001

[nova]
auth_url = http://controller01:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = che001

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Edit /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
type_drivers = flat,vlan,vxlan,gre
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = True
Edit /etc/nova/nova.conf:

[neutron]
url = http://controller01:9696
auth_url = http://controller01:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = che001
service_metadata_proxy = True
5. Create the plugin symlink:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

6. Populate the database (warnings about "future" can be ignored):
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

7. Restart the nova service:
systemctl restart openstack-nova-api.service

8. Start the neutron service:
systemctl enable neutron-server.service
systemctl start neutron-server.service
II: Network node configuration

1. Edit /etc/sysctl.conf:
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

2. Apply the changes immediately:
sysctl -p
3. Install the packages:
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch -y
4. Configure the components. Edit /etc/neutron/neutron.conf:

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit
auth_strategy = keystone

[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = che001

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
5. Edit /etc/neutron/plugins/ml2/openvswitch_agent.ini:

[ovs]
# data-network IP of the network node
local_ip = 1.1.1.119
bridge_mappings = external:br-ex

[agent]
tunnel_types = gre,vxlan
l2_population = True
prevent_arp_spoofing = True

6. Configure the L3 agent. Edit /etc/neutron/l3_agent.ini:

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge = br-ex

7. Configure the DHCP agent. Edit /etc/neutron/dhcp_agent.ini:

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True

8. Configure the metadata agent. Edit /etc/neutron/metadata_agent.ini:

[DEFAULT]
nova_metadata_ip = controller01
metadata_proxy_shared_secret = che001

9. Start the services on the network node:
systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service \
  neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service \
  neutron-dhcp-agent.service neutron-metadata-agent.service

10. Create the bridges:
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth2
(br-ex and eth2 need no IP addresses; with three network cards the workaround below is unnecessary)
Note: if you are short on network cards and want to use the network node's management NIC as the physical interface behind br-ex, remove the IP from the management NIC and move it into a new br-ex config file:
ovs-vsctl add-br br-ex

# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eno16777736"
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"

# cat /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
# MAC address of eno16777736
HWADDR=bc:ee:7b:78:7b:a7
IPADDR=172.16.209.10
GATEWAY=172.16.209.1
NETMASK=255.255.255.0
DNS1=202.106.0.20
DNS2=8.8.8.8
# NM_CONTROLLED=no means changes take effect on network restart/reload, not immediately
NM_CONTROLLED=no

systemctl restart network
ovs-vsctl add-port br-ex eth0
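Since moving the IP onto br-ex briefly cuts your management connection, it is worth reviewing the whole sequence before touching anything. The sketch below (function name and NIC name are illustrative, my own invention) only prints the plan without executing it:

```shell
# Sketch: print, without executing, the steps that move a NIC's IP onto br-ex.
br_ex_plan() {
    local nic="$1"
    echo "ovs-vsctl add-br br-ex"
    echo "edit ifcfg-${nic}: remove IPADDR/GATEWAY, keep BOOTPROTO=none"
    echo "create ifcfg-br-ex carrying the IP previously on ${nic}"
    echo "systemctl restart network"
    echo "ovs-vsctl add-port br-ex ${nic}"
}

# Usage: br_ex_plan eth0
```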
III: Compute node configuration

1. Edit /etc/sysctl.conf:
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

2. Apply the changes immediately:
sysctl -p

3. Install the packages:
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch -y
4. Edit /etc/neutron/neutron.conf:

[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone

[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = che001

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

5. Edit /etc/neutron/plugins/ml2/openvswitch_agent.ini:

[ovs]
# data-network IP of the compute node
local_ip = 1.1.1.117
#bridge_mappings = vlan:br-vlan

[agent]
tunnel_types = gre,vxlan
l2_population = True
prevent_arp_spoofing = True

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

6. Edit /etc/nova/nova.conf:

[neutron]
url = http://controller01:9696
auth_url = http://controller01:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = che001

7. Start the services:
systemctl enable neutron-openvswitch-agent.service
systemctl start neutron-openvswitch-agent.service
systemctl restart openstack-nova-compute.service
PART 6: Deploying the dashboard

On the controller node:

1. Install the package:
yum install openstack-dashboard -y

2. Configure /etc/openstack-dashboard/local_settings:
OPENSTACK_HOST = "controller01"
ALLOWED_HOSTS = ['*', ]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller01:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
TIME_ZONE = "UTC"
3. Start the services:
systemctl enable httpd.service memcached.service
systemctl restart httpd.service memcached.service

4. Verify at http://172.16.209.115/dashboard
Summary:
[*] Only the API layer of each service talks to keystone, so do not sprinkle auth configuration everywhere.
[*] When an instance is created, nova-compute is responsible for calling the other services' APIs, so there is nothing extra to configure on the controller for those calls.
[*] ml2 is neutron's core plugin and only needs to be configured on the controller node.
[*] The network node only needs its agents configured.
[*] Each component's API does more than accept requests; among other things it validates them. The controller's nova.conf carries neutron's API address and credentials because nova boot must check that the network the user submitted is valid. The controller's neutron.conf carries nova's API address and credentials because deleting a network port requires asking nova-api whether an instance is still using it. The compute node's nova.conf carries neutron settings because nova-compute asks neutron-server to create ports. "Port" here means a virtual switch port.
[*] If any of this is unclear, study how the OpenStack components communicate and how instance creation flows, or come to my classes; most blog posts do not teach the real thing.
Network troubleshooting. On the network node:
# ip netns show
qdhcp-e63ab886-0835-450f-9d88-7ea781636eb8
qdhcp-b25baebb-0a54-4f59-82f3-88374387b1ec
qrouter-ff2ddb48-86f7-4b49-8bf4-0335e8dbaa83
# ip netns exec qrouter-ff2ddb48-86f7-4b49-8bf4-0335e8dbaa83 bash
# ping -c2 www.baidu.com
PING www.a.shifen.com (61.135.169.125) 56(84) bytes of data.
64 bytes from 61.135.169.125: icmp_seq=1 ttl=52 time=33.5 ms
64 bytes from 61.135.169.125: icmp_seq=2 ttl=52 time=25.9 ms

If the ping fails, exit the namespace and rebuild the bridges:
ovs-vsctl del-br br-ex
ovs-vsctl del-br br-int
ovs-vsctl del-br br-tun
ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth0
systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service \
  neutron-dhcp-agent.service neutron-metadata-agent.service
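Router namespace names change with every deployment, so a small filter (my own sketch) saves copying UUIDs by hand; it picks the first qrouter namespace out of the "ip netns show" output:

```shell
# Sketch: extract the first qrouter-* namespace from "ip netns show" output.
qrouter_ns() {
    grep '^qrouter-' | head -n 1
}

# Usage: ip netns exec "$(ip netns show | qrouter_ns)" ping -c2 www.baidu.com
```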
Basic OpenStack operations

My environment:
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107182900144-729608402.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107142059081-1571785680.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107142121737-1967759526.png

As the admin user, create a network: Admin / System / Networks / Create Network
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107143404019-1265202151.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107143421128-1785798236.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107144214066-1300294065.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107144233956-1636966756.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107144252441-410557901.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107144304941-1291782841.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107144321612-653544702.png
Regular user demo:
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107150339987-106552875.png

Create a tenant network:
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107150406175-1490778745.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107150419206-1059468025.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107150514472-1402427260.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107150534550-1792152027.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107150547862-1650048490.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107150603722-710179046.png

Create a router:
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107150623909-853148782.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107150710003-1609531083.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107150723550-1751058720.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107150743878-848323981.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107150750206-1609203965.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107150756612-2107246799.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107150814956-1106255549.png

Create an instance attached to the demo-net network:
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107151622659-838477903.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107151629753-2057469031.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107151635425-1325980807.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107151643066-127829483.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107151716659-2095590442.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107151721706-754572000.png

Click vm1 and open its console:
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107152551081-1574665980.png
Log in with user cirros, password cubswin:)
Ping an external domain or address to check connectivity:
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107152603628-1230197663.png

Inspect the router:
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107155009722-55418865.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107155028050-1673709417.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107155035362-162401299.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107155049128-1990913409.png
Bind a floating IP so external hosts can reach the instance:
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107162325300-1124061243.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107162335581-1330609834.png

Allow external SSH access to the instance:
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107165044987-1943560063.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107165053144-1475052654.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107165058706-1944910392.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107165105128-521004569.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107165114816-747359328.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107184018612-1269843580.png

By default, networks belonging to different tenants cannot communicate. To allow it, admin must mark the networks as shared and route between them:
As admin, create project ops and two users ops1 and ops2; create group opsgroup, add both users to it, and add opsgroup to the ops project.
Logged in as ops1, create network ops with subnet 172.16.10.0/24.
As demo, create network demo-sub2 with subnet 172.16.1.0/24.
As admin, create router core-router, mark networks demo-sub2 and ops as shared, and in the network topology attach both networks to core-router.
As demo, create instance vm2 attached to the demo-sub2 network in the ssh-sec security group; assume its IP is 172.16.1.3.
As ops1, create instance vm_ops1 attached to the ops network and log in to it; assume its DHCP address is 172.16.10.3. Then run:
ping 172.16.1.3
ssh cirros@172.16.1.3
and check that both succeed.
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107180506128-1093767398.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107180513206-860142159.png
http://images2015.cnblogs.com/blog/885885/201701/885885-20170107180518737-504541726.png
http://egon09.blog.51cto.com/9161406/1839667