I deployed Red Hat OpenStack Platform 10 (Newton) on my Dell box using virtual machines.
The deployment had 3 x Controller nodes, 1 x Compute node, and 3 x Ceph nodes.
I will not go through the entire deployment process, but I will show some of the YAML files needed for the deployment.
Below is the list of virtual machines on the hypervisor, followed by the command/script used to deploy the Overcloud and the storage-environment.yaml file used in this deployment.
[root@openstack ~]# virsh list --all
 Id    Name                    State
----------------------------------------------------
 249   undercloud              running
 278   overcloud-controller3   running
 279   overcloud-ceph2         running
 280   overcloud-compute1      running
 281   overcloud-ceph3         running
 282   overcloud-ceph1         running
 285   overcloud-controller2   running
 286   overcloud-controller1   running
 -     overcloud-compute2      shut off
[stack@undercloud ~]$ cat run_deploy.sh.ceph
#!/bin/bash
set -o verbose
source stackrc
pushd /home/stack
openstack overcloud deploy --templates ~/my_templates \
  -e /home/stack/my_templates/advanced-networking.yaml \
  -e /home/stack/my_templates/storage-environment.yaml \
  --ntp-server 0.north-america.pool.ntp.org \
  --control-flavor control --control-scale 3 \
  --compute-flavor compute --compute-scale 1 \
  --ceph-storage-flavor ceph-storage --ceph-storage-scale 3 \
  --neutron-tunnel-types vxlan --neutron-network-type vxlan
echo DONE
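That one command packs in a lot: the template directory, two environment files, an NTP server, per-role flavor/scale flags, and the Neutron tunnel settings. Purely as an illustration (this is not how TripleO itself builds the command), you can think of the per-role flags as generated from a role-to-count map:

```python
# Illustration only: assemble the overcloud deploy command line from role counts.
# Role names and counts mirror the script above; TripleO does not work this way.
roles = {
    "control": 3,       # 3 controller nodes
    "compute": 1,       # 1 compute node
    "ceph-storage": 3,  # 3 Ceph nodes
}

args = ["openstack", "overcloud", "deploy", "--templates", "~/my_templates"]
for env in ("advanced-networking.yaml", "storage-environment.yaml"):
    args += ["-e", f"/home/stack/my_templates/{env}"]
for role, count in roles.items():
    # in this deployment each role's flavor name matches the role name
    args += [f"--{role}-flavor", role, f"--{role}-scale", str(count)]

cmd = " ".join(args)
print(cmd)
```

The generated string contains the same flavor/scale pairs as the script above, which makes it easy to see at a glance which role gets how many nodes.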
[stack@undercloud ~]$ cat my_templates/storage-environment.yaml
## A Heat environment file which can be used to set up storage
## backends. Defaults to Ceph used as a backend for Cinder, Glance and
## Nova ephemeral storage.
resource_registry:
  OS::TripleO::Services::CephMon: /usr/share/openstack-tripleo-heat-templates/puppet/services/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: /usr/share/openstack-tripleo-heat-templates/puppet/services/ceph-osd.yaml
  OS::TripleO::Services::CephClient: /usr/share/openstack-tripleo-heat-templates/puppet/services/ceph-client.yaml
  OS::TripleO::NodeUserData: /home/stack/my_templates/firstboot/wipe-disks.yaml

parameter_defaults:
  #### BACKEND SELECTION ####

  ## Whether to enable iscsi backend for Cinder.
  CinderEnableIscsiBackend: false
  ## Whether to enable rbd (Ceph) backend for Cinder.
  CinderEnableRbdBackend: true
  ## Cinder Backup backend can be either 'ceph' or 'swift'.
  CinderBackupBackend: ceph
  ## Whether to enable NFS backend for Cinder.
  # CinderEnableNfsBackend: false
  ## Whether to enable rbd (Ceph) backend for Nova ephemeral storage.
  NovaEnableRbdBackend: true
  ## Glance backend can be either 'rbd' (Ceph), 'swift' or 'file'.
  GlanceBackend: rbd
  ## Gnocchi backend can be either 'rbd' (Ceph), 'swift' or 'file'.
  GnocchiBackend: rbd

  #### CINDER NFS SETTINGS ####

  ## NFS mount options
  # CinderNfsMountOptions: ''
  ## NFS mount point, e.g. '192.168.122.1:/export/cinder'
  # CinderNfsServers: ''

  #### GLANCE NFS SETTINGS ####

  ## Make sure to set `GlanceBackend: file` when enabling NFS
  ##
  ## Whether to make Glance 'file' backend a NFS mount
  # GlanceNfsEnabled: false
  ## NFS share for image storage, e.g. '192.168.122.1:/export/glance'
  ## (If using IPv6, use both double- and single-quotes,
  ## e.g. "'[fdd0::1]:/export/glance'")
  # GlanceNfsShare: ''
  ## Mount options for the NFS image storage mount point
  # GlanceNfsOptions: 'intr,context=system_u:object_r:glance_var_lib_t:s0'

  #### CEPH SETTINGS ####

  ## When deploying Ceph Nodes through the oscplugin CLI, the following
  ## parameters are set automatically by the CLI. When deploying via
  ## heat stack-create or ceph on the controller nodes only,
  ## they need to be provided manually.

  ## Number of Ceph storage nodes to deploy
  # CephStorageCount: 0
  ## Ceph FSID, e.g. '4b5c8c0a-ff60-454b-a1b4-9747aa737d19'
  # CephClusterFSID: ''
  ## Ceph monitor key, e.g. 'AQC+Ox1VmEr3BxAALZejqeHj50Nj6wJDvs96OQ=='
  # CephMonKey: ''
  ## Ceph admin key, e.g. 'AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ=='
  # CephAdminKey: ''
  ## Ceph client key, e.g 'AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw=='
  # CephClientKey: ''

  ExtraConfig:
    ceph::profile::params::osds:
      '/dev/vdb': {}
As you can see above, RBD (Ceph) is used as the backend for Cinder. Ceph is also used for Cinder backups, for Nova ephemeral storage, as the Glance backend to store images, and as the Gnocchi backend.
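Before deploying, it can be worth sanity-checking these settings. Here is a minimal Python sketch that restates the parameter_defaults above as a plain dict (rather than parsing the YAML) and asserts that the backend selection is consistent: exactly one Cinder block-storage backend enabled, and everything else pointed at Ceph.

```python
# parameter_defaults copied from storage-environment.yaml above
params = {
    "CinderEnableIscsiBackend": False,
    "CinderEnableRbdBackend": True,
    "CinderBackupBackend": "ceph",
    "NovaEnableRbdBackend": True,
    "GlanceBackend": "rbd",
    "GnocchiBackend": "rbd",
}

# Exactly one Cinder block-storage backend should be enabled at a time.
enabled = [k for k in ("CinderEnableIscsiBackend", "CinderEnableRbdBackend")
           if params[k]]
assert enabled == ["CinderEnableRbdBackend"], enabled

# Backup, Glance and Gnocchi should all point at Ceph in this deployment.
assert params["CinderBackupBackend"] == "ceph"
assert all(params[k] == "rbd" for k in ("GlanceBackend", "GnocchiBackend"))
print("storage backend selection looks consistent")
```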
There are two other files that I'd like to share: the first is advanced-networking.yaml, and the other is the NIC config file for the Ceph node.
[stack@undercloud ~]$ cat my_templates/advanced-networking.yaml
# Enable the creation of Neutron networks for isolated Overcloud
# traffic and configure each role to assign ports (related
# to that role) on these networks.
resource_registry:
  OS::TripleO::Network::External: /home/stack/my_templates/network/external.yaml
  OS::TripleO::Network::InternalApi: /home/stack/my_templates/network/internal_api.yaml
  OS::TripleO::Network::StorageMgmt: /home/stack/my_templates/network/storage_mgmt.yaml
  OS::TripleO::Network::Storage: /home/stack/my_templates/network/storage.yaml
  OS::TripleO::Network::Tenant: /home/stack/my_templates/network/tenant.yaml
  # Management network is optional and disabled by default.
  # To enable it, include environments/network-management.yaml
  #OS::TripleO::Network::Management: /home/stack/my_templates/network/management.yaml

  # Port assignments for the VIPs
  OS::TripleO::Network::Ports::ExternalVipPort: /home/stack/my_templates/network/ports/external.yaml
  OS::TripleO::Network::Ports::InternalApiVipPort: /home/stack/my_templates/network/ports/internal_api.yaml
  OS::TripleO::Network::Ports::StorageVipPort: /home/stack/my_templates/network/ports/storage.yaml
  OS::TripleO::Network::Ports::StorageMgmtVipPort: /home/stack/my_templates/network/ports/storage_mgmt.yaml
  OS::TripleO::Network::Ports::RedisVipPort: /home/stack/my_templates/network/ports/vip.yaml

  # Port assignments for service virtual IPs for the controller role
  OS::TripleO::Controller::Ports::RedisVipPort: /home/stack/my_templates/network/ports/vip.yaml

  # Port assignments for the controller role
  OS::TripleO::Controller::Ports::ExternalPort: /home/stack/my_templates/network/ports/external.yaml
  OS::TripleO::Controller::Ports::InternalApiPort: /home/stack/my_templates/network/ports/internal_api.yaml
  OS::TripleO::Controller::Ports::StoragePort: /home/stack/my_templates/network/ports/storage.yaml
  OS::TripleO::Controller::Ports::StorageMgmtPort: /home/stack/my_templates/network/ports/storage_mgmt.yaml
  OS::TripleO::Controller::Ports::TenantPort: /home/stack/my_templates/network/ports/tenant.yaml
  #OS::TripleO::Controller::Ports::ManagementPort: /home/stack/my_templates/network/ports/management.yaml

  # Port assignments for the compute role
  OS::TripleO::Compute::Ports::ExternalPort: /home/stack/my_templates/network/ports/noop.yaml
  OS::TripleO::Compute::Ports::InternalApiPort: /home/stack/my_templates/network/ports/internal_api.yaml
  OS::TripleO::Compute::Ports::StoragePort: /home/stack/my_templates/network/ports/storage.yaml
  OS::TripleO::Compute::Ports::StorageMgmtPort: /home/stack/my_templates/network/ports/noop.yaml
  OS::TripleO::Compute::Ports::TenantPort: /home/stack/my_templates/network/ports/tenant.yaml
  #OS::TripleO::Compute::Ports::ManagementPort: /home/stack/my_templates/network/ports/management.yaml

  # Port assignments for the ceph storage role
  OS::TripleO::CephStorage::Ports::ExternalPort: /home/stack/my_templates/network/ports/noop.yaml
  OS::TripleO::CephStorage::Ports::InternalApiPort: /home/stack/my_templates/network/ports/noop.yaml
  OS::TripleO::CephStorage::Ports::StoragePort: /home/stack/my_templates/network/ports/storage.yaml
  OS::TripleO::CephStorage::Ports::StorageMgmtPort: /home/stack/my_templates/network/ports/storage_mgmt.yaml
  OS::TripleO::CephStorage::Ports::TenantPort: /home/stack/my_templates/network/ports/noop.yaml
  #OS::TripleO::CephStorage::Ports::ManagementPort: /home/stack/my_templates/network/ports/management.yaml

  # NIC Configs for our roles
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/my_templates/nic-configs/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/my_templates/nic-configs/controller.yaml
  OS::TripleO::CephStorage::Net::SoftwareConfig: /home/stack/my_templates/nic-configs/ceph-storage.yaml

parameter_defaults:
  # Internal API used for private OpenStack Traffic
  InternalApiNetCidr: 172.17.1.0/24
  InternalApiAllocationPools: [{'start': '172.17.1.10', 'end': '172.17.1.200'}]
  InternalApiNetworkVlanID: 101

  # Tenant Network Traffic - will be used for VXLAN over VLAN
  TenantNetCidr: 172.17.2.0/24
  TenantAllocationPools: [{'start': '172.17.2.10', 'end': '172.17.2.200'}]
  TenantNetworkVlanID: 201

  StorageNetCidr: 172.17.3.0/24
  StorageAllocationPools: [{'start': '172.17.3.10', 'end': '172.17.3.200'}]
  StorageNetworkVlanID: 301

  StorageMgmtNetCidr: 172.17.4.0/24
  StorageMgmtAllocationPools: [{'start': '172.17.4.10', 'end': '172.17.4.200'}]
  StorageMgmtNetworkVlanID: 401

  # External Networking Access - Public API Access
  ExternalNetCidr: 192.168.122.0/24
  # Leave room for floating IPs in the External allocation pool (if required)
  ExternalAllocationPools: [{'start': '192.168.122.100', 'end': '192.168.122.129'}]
  # Set to the router gateway on the external network
  ExternalInterfaceDefaultRoute: 192.168.122.1

  # Add in configuration for the Control Plane
  ControlPlaneSubnetCidr: "24"
  ControlPlaneDefaultRoute: 172.16.0.1
  EC2MetadataIp: 172.16.0.1
  DnsServers: ['192.168.122.1', '8.8.8.8']
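A typo in a CIDR or allocation pool here tends to surface later as a confusing deployment failure, so a quick sanity check is cheap insurance. This small sketch uses Python's standard ipaddress module with the values copied from the file above to confirm each pool sits inside its network:

```python
import ipaddress

# (CIDR, pool start, pool end) copied from advanced-networking.yaml above
nets = {
    "InternalApi": ("172.17.1.0/24", "172.17.1.10", "172.17.1.200"),
    "Tenant":      ("172.17.2.0/24", "172.17.2.10", "172.17.2.200"),
    "Storage":     ("172.17.3.0/24", "172.17.3.10", "172.17.3.200"),
    "StorageMgmt": ("172.17.4.0/24", "172.17.4.10", "172.17.4.200"),
    "External":    ("192.168.122.0/24", "192.168.122.100", "192.168.122.129"),
}

for name, (cidr, start, end) in nets.items():
    net = ipaddress.ip_network(cidr)
    s, e = ipaddress.ip_address(start), ipaddress.ip_address(end)
    # both pool ends must fall inside the network, and start must precede end
    assert s in net and e in net and s < e, name
    print(f"{name}: pool {start}-{end} fits in {cidr}")
```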
Next, we will look at the NIC config for the Ceph Storage node.
[stack@undercloud ~]$ cat /home/stack/my_templates/nic-configs/ceph-storage.yaml
heat_template_version: 2015-04-30

description: >
  Software Config to drive os-net-config to configure multiple interfaces
  for the ceph storage role.

parameters:
  ControlPlaneIp:
    default: ''
    description: IP address/subnet on the ctlplane network
    type: string
  ExternalIpSubnet:
    default: ''
    description: IP address/subnet on the external network
    type: string
  InternalApiIpSubnet:
    default: ''
    description: IP address/subnet on the internal API network
    type: string
  StorageIpSubnet:
    default: ''
    description: IP address/subnet on the storage network
    type: string
  StorageMgmtIpSubnet:
    default: ''
    description: IP address/subnet on the storage mgmt network
    type: string
  TenantIpSubnet:
    default: ''
    description: IP address/subnet on the tenant network
    type: string
  ManagementIpSubnet: # Only populated when including environments/network-management.yaml
    default: ''
    description: IP address/subnet on the management network
    type: string
  ExternalNetworkVlanID:
    default: 10
    description: Vlan ID for the external network traffic.
    type: number
  InternalApiNetworkVlanID:
    default: 20
    description: Vlan ID for the internal_api network traffic.
    type: number
  StorageNetworkVlanID:
    default: 30
    description: Vlan ID for the storage network traffic.
    type: number
  StorageMgmtNetworkVlanID:
    default: 40
    description: Vlan ID for the storage mgmt network traffic.
    type: number
  TenantNetworkVlanID:
    default: 50
    description: Vlan ID for the tenant network traffic.
    type: number
  ExternalInterfaceDefaultRoute:
    default: '10.0.0.1'
    description: default route for the external network
    type: string
  ControlPlaneSubnetCidr: # Override this via parameter_defaults
    default: '24'
    description: The subnet CIDR of the control plane network.
    type: string
  ControlPlaneDefaultRoute: # Override this via parameter_defaults
    description: The default route of the control plane network.
    type: string
  DnsServers: # Override this via parameter_defaults
    default: []
    description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf.
    type: json
  EC2MetadataIp: # Override this via parameter_defaults
    description: The IP address of the EC2 metadata server.
    type: string

resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            -
              type: interface
              name: nic1
              use_dhcp: false
              dns_servers: {get_param: DnsServers}
              addresses:
                -
                  ip_netmask:
                    list_join:
                      - '/'
                      - - {get_param: ControlPlaneIp}
                        - {get_param: ControlPlaneSubnetCidr}
              routes:
                -
                  ip_netmask: 0.0.0.0/0
                  next_hop: {get_param: ControlPlaneDefaultRoute}
                  # Optionally have this interface as default route
                  default: true
                -
                  ip_netmask: 169.254.169.254/32
                  next_hop: {get_param: EC2MetadataIp}
            -
              type: ovs_bridge
              name: br-isolated
              use_dhcp: false
              members:
                -
                  type: interface
                  name: nic2
                  # force the MAC address of the bridge to this interface
                  primary: true
                -
                  type: vlan
                  vlan_id: {get_param: StorageNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: StorageIpSubnet}
                -
                  type: vlan
                  vlan_id: {get_param: StorageMgmtNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: StorageMgmtIpSubnet}

outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value: {get_resource: OsNetConfigImpl}
NIC 1 of the Ceph node is used for Provisioning, while NIC 2 carries VLANs for Storage and Storage Management traffic.
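For reference, the list_join intrinsic in the nic1 addresses block simply joins the node's control-plane IP with the ControlPlaneSubnetCidr prefix ("24") using a '/'. A quick Python sketch of that evaluation, using 172.16.0.32 (one of the ctlplane addresses from this deployment) as the example IP:

```python
def list_join(sep, items):
    # simplified stand-in for the Heat list_join intrinsic:
    # concatenate the items with the given separator
    return sep.join(items)

# ControlPlaneIp comes from the port assignment at deploy time;
# ControlPlaneSubnetCidr is "24" per advanced-networking.yaml
ip_netmask = list_join("/", ["172.16.0.32", "24"])
print(ip_netmask)  # -> 172.16.0.32/24
```

os-net-config then assigns the resulting CIDR-notation address to nic1.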
After the deployment completes successfully, here is what you will see.
[stack@undercloud ~]$ neutron net-list
+--------------------------------------+--------------+-------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+--------------+-------------------------------------------------------+
| 542b5f6d-d0a6-4fd2-a905-996208a77525 | storage | 4449344f-089f-426f-bd5f-37591e9e71ea 172.17.3.0/24 |
| 9e2ab19a-3b29-47a7-8db8-995883e1511b | internal_api | 1901412a-d54b-4274-8042-7daacfe07bfc 172.17.1.0/24 |
| a9fedf2d-537c-461c-b15c-1f3fe2467172 | ctlplane | 869c6ad4-d133-4076-875f-347026dfee88 172.16.0.0/24 |
| b7bc5c80-a57b-40b7-ac5c-9beac1da84d7 | storage_mgmt | 2b724ccd-a9eb-4aa8-907a-14bcebd34e27 172.17.4.0/24 |
| c5392029-9438-4521-8202-f9a89648d8a5 | tenant | 3b4b55a3-97d1-40dc-b95c-51bd2012e2ae 172.17.2.0/24 |
| cef77887-476d-47ba-8d06-4fa65acbe5ff | external | 0fca9fa0-b3eb-4b25-a987-cc936a28d659 192.168.122.0/24 |
+--------------------------------------+--------------+-------------------------------------------------------+
[stack@undercloud ~]$ ironic node-list
+--------------------------------------+-----------------------+--------------------------------------+-------------+--------------------+-------------+
| UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+-----------------------+--------------------------------------+-------------+--------------------+-------------+
| 945a30f3-dc7f-4ca4-946d-18f38a352e1e | overcloud-ceph1 | bdc05172-4e5f-4aaf-9758-bbe0d2cb2012 | power on | active | False |
| d78b8cac-4379-4825-bb22-4e9e9f259fcd | overcloud-ceph2 | 1208a491-157f-4f43-9267-017aba21b331 | power on | active | False |
| 4b50450a-eaee-4828-ab43-a96f36d52789 | overcloud-ceph3 | 2a658d35-64f7-40d4-9d54-142640279a86 | power on | active | False |
| 682b6813-5df2-4829-927a-a1b95e081119 | overcloud-compute1 | 8b626a49-ad7b-4e1e-9dec-8ca1abb5073e | power on | active | False |
| 2645e0ed-c403-4e16-afdd-b293726fd0eb | overcloud-compute2 | None | power off | available | False |
| af20af15-7d13-41fa-9862-2a191108be4c | overcloud-controller1 | 5ee03387-25bc-4368-9da4-e6a73cecd455 | power on | active | False |
| 4272e8f0-7409-4858-b17a-323ff0fbd43a | overcloud-controller2 | f55665f3-8926-4bc2-8068-74d9001600e6 | power on | active | False |
| a9325cf0-9b0f-405f-8cbe-697e4146ffbd | overcloud-controller3 | 40fc318e-22fe-4418-a063-eae58719da38 | power on | active | False |
+--------------------------------------+-----------------------+--------------------------------------+-------------+--------------------+-------------+
[stack@undercloud ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+----------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------------+--------+------------+-------------+----------------------+
| bdc05172-4e5f-4aaf-9758-bbe0d2cb2012 | overcloud-cephstorage-0 | ACTIVE | - | Running | ctlplane=172.16.0.32 |
| 1208a491-157f-4f43-9267-017aba21b331 | overcloud-cephstorage-1 | ACTIVE | - | Running | ctlplane=172.16.0.26 |
| 2a658d35-64f7-40d4-9d54-142640279a86 | overcloud-cephstorage-2 | ACTIVE | - | Running | ctlplane=172.16.0.34 |
| 8b626a49-ad7b-4e1e-9dec-8ca1abb5073e | overcloud-compute-0 | ACTIVE | - | Running | ctlplane=172.16.0.27 |
| 40fc318e-22fe-4418-a063-eae58719da38 | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=172.16.0.31 |
| 5ee03387-25bc-4368-9da4-e6a73cecd455 | overcloud-controller-1 | ACTIVE | - | Running | ctlplane=172.16.0.23 |
| f55665f3-8926-4bc2-8068-74d9001600e6 | overcloud-controller-2 | ACTIVE | - | Running | ctlplane=172.16.0.36 |
+--------------------------------------+-------------------------+--------+------------+-------------+----------------------+
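Notice that the Instance UUID column of ironic node-list lines up with the ID column of nova list; that is how an Ironic node name such as overcloud-ceph1 maps to the Nova server name overcloud-cephstorage-0. A tiny Python join over two of the rows above makes the mapping explicit:

```python
# Two rows copied from the ironic node-list output above: name -> Instance UUID
ironic = {
    "overcloud-ceph1": "bdc05172-4e5f-4aaf-9758-bbe0d2cb2012",
    "overcloud-compute1": "8b626a49-ad7b-4e1e-9dec-8ca1abb5073e",
}
# Matching rows from the nova list output above: server ID -> (name, ctlplane IP)
nova = {
    "bdc05172-4e5f-4aaf-9758-bbe0d2cb2012": ("overcloud-cephstorage-0", "172.16.0.32"),
    "8b626a49-ad7b-4e1e-9dec-8ca1abb5073e": ("overcloud-compute-0", "172.16.0.27"),
}

# Join Ironic nodes to Nova servers on the instance UUID
for node, uuid in ironic.items():
    server, ip = nova[uuid]
    print(f"{node} -> {server} ({ip})")
```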
In the next post we will look into the details of Ceph.