[Original Sharing Program 8] Notes on Installing OpenStack Train
  

Long time no see. I have been studying OpenStack recently and just got an environment up and running a couple of days ago; the installation had plenty of pitfalls. I have organized my notes and am sharing them with the community.
The Ussuri release already has official installation docs, but when I installed a few days ago only Train was available, so the steps below apply to the Train release. That said, the installation and configuration of the main components has not changed much since the Mitaka release.
Enough small talk, let's get started. The whole process follows the official documentation: https://docs.openstack.org/install-guide/openstack-services.html

1. Physical environment preparation
Controller node: 1, management and data traffic share one IP: 1**.1**.23.120, CentOS 7, 4 vCPU / 4 GB RAM, hostname: controller
Compute node: 1, management and data traffic share one IP: 1**.1**.23.121, CentOS 7, 4 vCPU / 4 GB RAM, hostname: compute1
Block storage node: 1, management and data traffic share one IP: 1**.1**.23.122, CentOS 7, 4 vCPU / 4 GB RAM, hostname: block1
2. Base environment setup
Install net-tools, disable SELinux, and disable the firewall (all nodes):
yum -y install net-tools
systemctl disable firewalld &&systemctl stop firewalld
vim /etc/selinux/config
Edit the NIC configuration file; make sure BOOTPROTO is set to none:
BOOTPROTO="none"
Edit /etc/hosts on every node to bind the hostnames (detailed steps omitted; a minimal sketch follows below).
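Since the hosts and SELinux edits are only mentioned above, here is a minimal sketch of what they typically look like, reusing the hostnames and masked addresses from section 1; adjust to your own environment.
# /etc/selinux/config (all nodes)
SELINUX=disabled
# /etc/hosts (all nodes)
1**.1**.23.120 controller
1**.1**.23.121 compute1
1**.1**.23.122 block1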
Configure the NTP service (all nodes; the other nodes get their time from the controller, see the sketch below):
/etc/chrony.conf
On the controller, allow the other nodes to synchronize time:
allow 1**.1**.23.0/24
systemctl enable chronyd.service &&systemctl start chronyd.service
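On the compute and block storage nodes, a minimal sketch of the matching chrony setup (assuming they take time only from the controller): comment out the default server/pool lines in /etc/chrony.conf, add the line below, restart chronyd, then check the sources on any node.
server controller iburst
systemctl enable chronyd.service && systemctl restart chronyd.service
chronyc sources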
Install the Train release packages (these OpenStack package steps must be completed on all nodes: controller, compute, and block storage):
yum install centos-release-openstack-train -y
yum upgrade -y
yum install python-openstackclient -y
yum install openstack-selinux -y
Install the database (controller node):
yum install mariadb mariadb-server python2-PyMySQL
vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 1**.1**.23.120
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
systemctl enable mariadb.service && systemctl start mariadb.service
mysql_secure_installation
Install the message queue, RabbitMQ (controller node):
yum install rabbitmq-server
systemctl enable rabbitmq-server.service && systemctl start rabbitmq-server.service
rabbitmqctl add_user openstack RABBIT_PASS
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
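As an extra sanity check not in the original post, the new account and its permissions can be listed:
rabbitmqctl list_users
rabbitmqctl list_permissions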
Install memcached, used by the Identity service for caching (controller node):
yum install memcached python-memcached
Edit the configuration and add the management address:
/etc/sysconfig/memcached
OPTIONS="-l 1**.0.0.1,::1,controller"
systemctl enable memcached.service && systemctl start memcached.service
Install etcd (controller node):
yum install etcd
Edit the configuration file:
vim /etc/etcd/etcd.conf
[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://1**.1**.23.120:2380"
ETCD_LISTEN_CLIENT_URLS="http://1**.1**.23.120:2379"
ETCD_NAME="controller"
[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://1**.1**.23.120:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://1**.1**.23.120:2379"
ETCD_INITIAL_CLUSTER="controller=http://1**.1**.23.120:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
systemctl enable etcd && systemctl start etcd
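One optional way to confirm etcd is answering (not part of the original steps) is to query its version endpoint:
curl http://1**.1**.23.120:2379/version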
3. Install Keystone
Install Keystone on the controller node.
Create the keystone database:
mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
Install and configure Keystone and httpd:
yum install openstack-keystone httpd mod_wsgi
Edit the configuration file:
vim /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
[token]
provider = fernet
su -s /bin/sh -c "keystone-manage db_sync" keystone
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
keystone-manage bootstrap --bootstrap-password **_PASS --bootstrap-**-url http://controller:5000/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne

Configure httpd:
vim /etc/httpd/conf/httpd.conf
ServerName controller
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Start httpd:
systemctl enable httpd.service && systemctl start httpd.service
export OS_USERNAME=** && export OS_PASSWORD=**_PASS && export OS_PROJECT_NAME=** && export OS_USER_DOMAIN_NAME=Default && export OS_PROJECT_DOMAIN_NAME=Default && export OS_AUTH_URL=http://controller:5000/v3 && export OS_IDENTITY_API_VERSION=3
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name ** --os-username ** token issue
Create the OpenStack client environment script.
Create and edit the **-openrc file and add the following:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=**
export OS_USERNAME=**
export OS_PASSWORD=**_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
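A quick usage sketch (the actual file name depends on what you called the script above): source it, then request a token to confirm the credentials work.
. **-openrc
openstack token issue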
4. Install Glance
Create the database:
mysql -u root -p
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
Create the glance user:
openstack user create --domain default --password-prompt glance
Add the ** role to the glance user and the service project:
openstack role add --project service --user glance **
Create the glance service entity and endpoints:
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image ** http://controller:9292
Install and configure Glance:
yum install openstack-glance
Edit the configuration file:
vim /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
su -s /bin/sh -c "glance-manage db_sync" glance
systemctl enable openstack-glance-api.service && systemctl start openstack-glance-api.service
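The official guide verifies the Image service by uploading a small CirrOS image; a sketch of that check (CirrOS 0.4.0 assumed) looks like this:
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
openstack image create "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public
openstack image list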
5. Install Placement
Create the database:
mysql -u root -p
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'PLACEMENT_DBPASS';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'PLACEMENT_DBPASS';
Create the user, service, and endpoints:
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement **
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement ** http://controller:8778
Install Placement:
yum install openstack-placement-api
Edit the configuration file:
vim /etc/placement/placement.conf
[placement_database]
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
su -s /bin/sh -c "placement-manage db sync" placement
systemctl restart httpd
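An optional status check from the official verification steps:
placement-status upgrade check
If the Placement API later returns 403 errors on CentOS 7, one commonly reported workaround (an assumption here, verify against your packaging) is to add the following inside the VirtualHost block of /etc/httpd/conf.d/00-placement-api.conf and restart httpd:
<Directory /usr/bin>
   Require all granted
</Directory>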

6. Install Nova
Configure the controller node:
mysql -u root -p
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
openstack user create --domain default --password-prompt nova
openstack role add --project service --user nova **
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute ** http://controller:8774/v2.1
Install the Nova packages:
yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler
Edit the configuration file:
vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
[DEFAULT]
my_ip = 1**.1**.23.120
[DEFAULT]
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
[glance]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service && systemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
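As an optional check borrowed from the official verification steps, the following should report the API, cell, and placement checks as successful:
nova-status upgrade check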
Next, install and configure the compute node:
yum install openstack-nova-compute
cp /etc/nova/nova.conf /etc/nova/nova.conf.bk && grep -Ev '^$|#' /etc/nova/nova.conf.bk >/etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
[api]
auth_strategy = keystone
[keystone_authtoken]

www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[DEFAULT]
my_ip = 1**.1**.23.121
[DEFAULT]
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
[libvirt]
virt_type = qemu
systemctl enable libvirtd.service openstack-nova-compute.service && systemctl start libvirtd.service openstack-nova-compute.service
Now switch back to the controller node and discover the compute node:
openstack compute service list --service nova-compute
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
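Instead of running discovery by hand every time a compute node is added, the official guide notes that an interval can be set in /etc/nova/nova.conf on the controller, for example:
[scheduler]
discover_hosts_in_cells_interval = 300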

7. Install Neutron
Install Neutron on the controller node. ## This lab uses the provider network option.
mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
openstack user create --domain default --password-prompt neutron
Enter the password NEUTRON_PASS.
openstack role add --project service --user neutron **
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network ** http://controller:9696
Install the components:
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
Edit the configuration files (strip the comments first to keep them readable):
cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bk && grep -Ev '^$|#' /etc/neutron/neutron.conf.bk >/etc/neutron/neutron.conf
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
[DEFAULT]
core_plugin = ml2
service_plugins =
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[DEFAULT]
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]   ### add this section header yourself; it is not present in the stripped file
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan
[ml2]
tenant_network_types =
[ml2]
mechanism_drivers = linuxbridge
[ml2]
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[securitygroup]
enable_ipset = true
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bk && grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bk >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
Note: replace PROVIDER_INTERFACE_NAME with the name of the physical NIC.
[vxlan]
enable_vxlan = false
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the kernel parameters:
vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
Then run:
modprobe br_netfilter
sysctl -p
cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak && grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini
vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
vim /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
systemctl restart openstack-nova-api.service
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service &&systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
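At this point the agents can be checked from the controller, a verification step borrowed from the official guide; the linuxbridge, DHCP, and metadata agents should show as alive:
openstack network agent list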
Next, install the Neutron components on the compute node:
yum install openstack-neutron-linuxbridge ebtables ipset
vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak && grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth1
Replace eth1 with the actual NIC name.
[vxlan]
enable_vxlan = false
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
vim /etc/sysctl.conf
Add the following parameters:
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
Then run:
modprobe br_netfilter
sysctl -p
vim /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service && systemctl start neutron-linuxbridge-agent.service

8. Install the Dashboard
Install the dashboard on the controller node:
yum install openstack-dashboard
vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['one.example.com', 'two.example.com', '*']
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
TIME_ZONE = "Asia/Shanghai"
Regenerate the Apache web config for the dashboard:
cd /usr/share/openstack-dashboard
python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf
ln -s /etc/openstack-dashboard /usr/share/openstack-dashboard/openstack_dashboard/conf
systemctl restart httpd.service memcached.service
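Assuming the config generated by make_web_conf serves the dashboard at the web root, a quick reachability check (not in the original post) before logging in from a browser:
curl -I http://controller/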
9. Install Cinder
First, configure the controller node.
Create the database and user:
mysql -u root -p
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
openstack user create --domain default --password-prompt cinder
Enter CINDER_PASS as the password.
openstack role add --project service --user cinder **
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 ** http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 ** http://controller:8776/v3/%\(project_id\)s
Install and configure Cinder:
yum install openstack-cinder
vim /etc/cinder/cinder.conf
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
[DEFAULT]
my_ip = 1**.1**.1.120  ## management IP
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
su -s /bin/sh -c "cinder-manage db sync" cinder
Configure Compute to use Block Storage:
vim /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
systemctl restart openstack-nova-api.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service && systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
Next, configure the storage node:
yum install lvm2 device-mapper-persistent-data
systemctl enable lvm2-lvmetad.service && systemctl start lvm2-lvmetad.service
Create the LVM physical volume and volume group:
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
vim /etc/lvm/lvm.conf
devices {
        filter = [ "a/sdb/", "r/.*/"]
}
yum install openstack-cinder targetcli python-keystone
vim /etc/cinder/cinder.conf
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
[DEFAULT]
my_ip = 1**.1**.23.122  ## management NIC IP
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm
[DEFAULT]
enabled_backends = lvm
[DEFAULT]
glance_api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
systemctl enable openstack-cinder-volume.service target.service && systemctl start openstack-cinder-volume.service target.service
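Back on the controller, the official guide verifies block storage with the service list; the scheduler on the controller and the lvm backend on block1 should both report as up:
openstack volume service list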
The main modules are now installed. After adding the controller hostname to your local hosts file, you can open the dashboard to manage and configure the environment.

