Overview

The Director undercloud node is an all-in-one Red Hat OpenStack Platform (RHOSP) deployment that provides the services used to install and manage OpenStack overclouds. The deployment toolset is known as TripleO (OpenStack-on-OpenStack). TripleO uses existing OpenStack components, such as Heat, together with Ansible playbooks, to provision, deploy, and configure bare-metal systems as OpenStack cloud nodes.

The undercloud is a deployment cloud whose workload is the overcloud nodes themselves, such as the controller, compute, and storage nodes.

Components and Services

  • Identity Service (Keystone)

    The Identity service provides user authentication and authorization, but only to the undercloud’s OpenStack services.

  • Image Service (Glance)

    The Image service stores the initial images to be deployed to bare-metal nodes. These images contain the Red Hat Enterprise Linux (RHEL) operating system, a KVM hypervisor, and container runtimes.

  • Compute Service (Nova)

    The Compute service works with the Bare Metal service to provision nodes, using the hardware attributes obtained when available systems are introspected. The Compute service's scheduler filters the available nodes to ensure that selected nodes meet role requirements.

  • Bare Metal Service (Ironic)

    The Bare Metal service manages and provisions physical machines. The ironic-inspector service performs introspection by PXE booting unregistered hardware. The undercloud uses an out-of-band management interface, such as IPMI, to perform power management during introspection.

  • Orchestration Service (Heat)

    The Orchestration service provides a set of YAML templates and node roles that define configuration and provisioning instructions for overcloud deployment. Default orchestration templates are located in /usr/share/openstack-tripleo-heat-templates and are designed to be customized using environment parameter files (see the sketch after this list).

  • Object Service (Swift)

    The undercloud Object store holds images, deployment logs, and introspection results.

  • Networking Service (Neutron)

    The Networking service configures interfaces for the required provisioning and external networks. The provisioning network provides DHCP and PXE boot functions for bare-metal nodes. The external network provides public connectivity.
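
As referenced in the Orchestration item above, overcloud customization is done by passing environment parameter files to the deploy command. The following is a minimal sketch, not taken from this environment: the file name ~/templates/node-counts.yaml is illustrative, while ControllerCount and ComputeCount are standard TripleO parameters.

(undercloud) [stack@director ~]$ cat ~/templates/node-counts.yaml
parameter_defaults:
  # Override the default number of nodes deployed per role
  ControllerCount: 1
  ComputeCount: 2

(undercloud) [stack@director ~]$ openstack overcloud deploy --templates \
> -e ~/templates/node-counts.yaml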

Viewing services

When the undercloud installation completes, it places a stackrc file in the stack user's home directory. This file contains the credentials and environment variables needed to access services on the undercloud.

The stackrc file is sourced from the stack user's .bashrc file, giving the stack user admin-level access to the undercloud, so you can manage and interact with the undercloud services as an administrator.

One particularly important variable in this file is OS_AUTH_URL, which points to the public endpoint of the undercloud's Identity service. It tells OpenStack clients where to find the authentication service, so that they can communicate with the services on the undercloud.
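
The exact contents of stackrc vary between RHOSP releases, but OS_AUTH_URL can be checked directly. Based on the Identity public endpoint listed later on this page, the value should look like the following (shown as an expectation, not captured output):

(undercloud) [stack@director ~]$ grep OS_AUTH_URL stackrc
export OS_AUTH_URL=https://172.25.249.201:13000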

(undercloud) [stack@director ~]$ source stackrc

Listing all services

(undercloud) [stack@director ~]$ openstack service list
+----------------------------------+------------------+-------------------------+
| ID                               | Name             | Type                    |
+----------------------------------+------------------+-------------------------+
| 2a08c08cc51a4f299536fc66e4b748b6 | nova             | compute                 |
| 313b7a22ef534d3fa367f50d7c9e2754 | mistral          | workflowv2              |
| 417c10b71acb4c1aa40169e251b3d16d | zaqar-websocket  | messaging-websocket     |
| 4d957e71f3284818b3a0617218f446bd | neutron          | network                 |
| 6402d082e73a459c93d5e0b70783b7e5 | placement        | placement               |
| 6f778463089446f49905f6842c25d92e | ironic-inspector | baremetal-introspection |
| 7aa774aa28c344e49aa9eb01de4900ec | zaqar            | messaging               |
| a4158e3f67a1472cb0798ebc979e4e3a | ironic           | baremetal               |
| a822dfa8c6da4695a702b46e38d0077d | heat             | orchestration           |
| ba7910222ac042c6a847a9a3c3c5074a | glance           | image                   |
| bd92462118c042a1a55967372b4b695f | keystone         | identity                |
| d9a5f802093d48958acb7f6e857f0384 | swift            | object-store            |
+----------------------------------+------------------+-------------------------+
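
Any single entry in the catalog can be examined by name or ID. For example, to show just a few fields of the Bare Metal service record (output omitted):

(undercloud) [stack@director ~]$ openstack service show ironic -c name -c type -c enabled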

Listing all service endpoints

(undercloud) [stack@director ~]$ openstack endpoint list -c 'Service Type' -c 'Interface' -c 'URL'
+-------------------------+-----------+----------------------------------------------------+
| Service Type            | Interface | URL                                                |
+-------------------------+-----------+----------------------------------------------------+
| placement               | internal  | http://172.25.249.202:8778/placement               |
| image                   | internal  | http://172.25.249.202:9292                         |
| baremetal               | internal  | http://172.25.249.202:6385                         |
| messaging-websocket     | admin     | ws://172.25.249.202:9000                           |
| placement               | admin     | http://172.25.249.202:8778/placement               |
| identity                | admin     | http://172.25.249.202:35357                        |
| compute                 | internal  | http://172.25.249.202:8774/v2.1                    |
| identity                | public    | https://172.25.249.201:13000                       |
| baremetal-introspection | admin     | http://172.25.249.202:5050                         |
| messaging               | internal  | http://172.25.249.202:8888                         |
| baremetal               | public    | https://172.25.249.201:13385                       |
| messaging-websocket     | internal  | ws://172.25.249.202:9000                           |
| image                   | admin     | http://172.25.249.202:9292                         |
| orchestration           | internal  | http://172.25.249.202:8004/v1/%(tenant_id)s        |
| placement               | public    | https://172.25.249.201:13778/placement             |
| image                   | public    | https://172.25.249.201:13292                       |
| compute                 | admin     | http://172.25.249.202:8774/v2.1                    |
| orchestration           | public    | https://172.25.249.201:13004/v1/%(tenant_id)s      |
| object-store            | public    | https://172.25.249.201:13808/v1/AUTH_%(tenant_id)s |
| orchestration           | admin     | http://172.25.249.202:8004/v1/%(tenant_id)s        |
| baremetal-introspection | internal  | http://172.25.249.202:5050                         |
| network                 | admin     | http://172.25.249.202:9696                         |
| network                 | public    | https://172.25.249.201:13696                       |
| workflowv2              | admin     | http://172.25.249.202:8989/v2                      |
| baremetal               | admin     | http://172.25.249.202:6385                         |
| messaging-websocket     | public    | wss://172.25.249.201:9000                          |
| object-store            | admin     | http://172.25.249.202:8080                         |
| identity                | internal  | http://172.25.249.202:5000                         |
| workflowv2              | public    | https://172.25.249.201:13989/v2                    |
| messaging               | admin     | http://172.25.249.202:8888                         |
| messaging               | public    | https://172.25.249.201:13888                       |
| workflowv2              | internal  | http://172.25.249.202:8989/v2                      |
| compute                 | public    | https://172.25.249.201:13774/v2.1                  |
| network                 | internal  | http://172.25.249.202:9696                         |
| object-store            | internal  | http://172.25.249.202:8080/v1/AUTH_%(tenant_id)s   |
| baremetal-introspection | public    | https://172.25.249.201:13050                       |
+-------------------------+-----------+----------------------------------------------------+
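
The endpoint list accepts filters, which is convenient when only one service matters. For example, to show only the Identity endpoints:

(undercloud) [stack@director ~]$ openstack endpoint list --service identity \
> -c 'Interface' -c 'URL'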

Listing all service passwords

(undercloud) [stack@director ~]$ cat undercloud-passwords.conf
[auth]
undercloud_admin_password: B7Hk2yX2zly2tKwDrVh3TGFjp
undercloud_admin_token: B88pjtnpb7ch1bCMLUJqLdGAj
undercloud_aodh_password: LKNMhUQNwhmapU74r8k8Llraw
undercloud_barbican_password: Aoi7Ai5Osgh4tjcfI6cc8OyWh
undercloud_barbican_simple_crypto_kek: bMabXDjqvX2k70V3c9Sip1na-Waw9N4VB-FQMMbxlqM=
undercloud_ceilometer_password: fmQ3eAQ3Eo9lIv5DL9C8okMzz
undercloud_ceph_grafana_admin_password: n42P9DzDhGxh7Emg5RPGt1SAE
undercloud_ceph_dashboard_admin_password: KTUxEA7MHFo4CP4KqHNnz7PQc
undercloud_cinder_password: db0EqnRpJ2JYkaIHMsPsdSwtJ
undercloud_congress_password: 1guMjvHqlYUDdbE60XoJxapgw
undercloud_designate_password: y4m53qVg8ulS1x3nDgwK7CdG2
undercloud_ec2_api_password: vNSVPFS5l05F9cyDhtfOXLeyH
undercloud_etcd_initial_cluster_token: dxmxMFJZ98HQ6eYBnX0nCCdB2
undercloud_glance_password: 3l62WMyGenHPfIcoWDJhLlqaG
undercloud_gnocchi_password: dAvnYVTzljdBPyn0gyi4b2LOh
undercloud_ha_proxy_stats_password: bszPVap67n7B7h4DF6KD2gfrL
undercloud_heat_password: IVgrfc9UKjBTAUOsjcsDUOjCx
undercloud_heat_stack_domain_admin_password: 3DzFaXoB9DodArrdxEnA0Xclo
undercloud_ironic_password: rcmaIUqoE3ft4vwfgGzWnUV80
undercloud_libvirt_tls_password: McT2WnXIzmzWYfoHmg3nzDnP2
undercloud_manila_password: PrNre8t0uKmglYPMGJrSp5DpU
undercloud_mistral_password: u6iGEVtUbpjBus07ksfwB9v75
undercloud_mysql_clustercheck_password: zjywUyG4OhHUhZF3FHIpagjwe
undercloud_mysql_root_password: 2mo7UszHbM
undercloud_neutron_password: wvaPHleiR0P2IpNIrbZ5JcPsS
undercloud_nova_password: 12k2eWmgC0dMezTQeA2R1iLK1
undercloud_novajoin_password: 6bYmgcdhEtcxPbGslIhIiqYuP
undercloud_octavia_password: nm7jAQgPloK8O9lSi8dUW7aCy
undercloud_open_daylight_password: t6ZrMUpqMce7Rls9pbkDGg9Rc
undercloud_panko_password: iW8YEy4xPrzxHT7Uz7nP1CmfE
undercloud_pcsd_password: BSUPMkdeMJwIUsy3
undercloud_placement_password: oEwI1ubaJq4WwZ5HXhTc5357D
undercloud_rpc_password: BOIlN86QyW6NnvIJrtEMgM3jx
undercloud_notify_password: xhqWzQqHcdV1JCMhlSeM8lwzy
undercloud_rabbit_password: 8ttaCabqb9I9ncw3lgBVrItw4
undercloud_redis_password: xRelsoGSzeZqn12Lqk4RMQvjj
undercloud_sahara_password: VgJjC5rDGM8hjAair44G67j7s
undercloud_snmpd_readonly_user_password: Fxz5cmRlA0JhAK4tuzNF2Asae
undercloud_swift_password: 9VRDEP87fnpWEBWJ623ct4x3i
undercloud_zaqar_password: 8wS1kCJ9u5QjQQE37YqADmK5c
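
Individual values can be extracted from this file when a service password is needed on the command line. A small sketch with awk, using a key name from the listing above:

(undercloud) [stack@director ~]$ awk -F': ' '/^undercloud_mysql_root_password/ {print $2}' undercloud-passwords.conf
2mo7UszHbM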

Viewing the network

A dedicated, high-throughput provisioning network, using DHCP and PXE boot, is employed to prepare and deploy the overcloud nodes.

After the overcloud is deployed, the undercloud continues to use the provisioning network to manage and update the overcloud nodes. Because the provisioning network is isolated from both internal overcloud traffic and external workload traffic, deployment and management activity does not interfere with the rest of the environment, which makes the overcloud more stable and easier to manage.

DHCP range

(undercloud) [stack@director ~]$ cat undercloud.conf | egrep -v "(^#.*|^$)"
[DEFAULT]
container_images_file = /home/stack/containers-prepare-parameter.yaml
custom_env_files = /home/stack/custom-undercloud-params.yaml
enable_telemetry = false
generate_service_certificate = false
hieradata_override = /home/stack/hieradata.yaml
local_interface = eth1
local_ip = 172.25.249.200/24
overcloud_domain_name = overcloud.example.com
undercloud_admin_host = 172.25.249.202
undercloud_debug = false
undercloud_ntp_servers = 172.25.254.254
undercloud_public_host = 172.25.249.201
undercloud_service_certificate = /etc/pki/tls/certs/undercloud.pem
[ctlplane-subnet]
cidr = 172.25.249.0/24
dhcp_end = 172.25.249.59
dhcp_start = 172.25.249.51
gateway = 172.25.249.200
inspection_iprange = 172.25.249.150,172.25.249.180
masquerade = true

dhcp_start and dhcp_end: the range of IP addresses that are dynamically assigned over DHCP. After a node completes PXE boot and registration, it is allocated an address from this range as its permanent management IP (also known as its ctlplane IP). In other words, these are the addresses that the nodes ultimately use after deployment.

inspection_iprange: the temporary IP range leased to nodes during PXE boot, used by ironic-inspector (or other node-discovery tools) while nodes are being discovered and prepared for deployment. Once introspection is complete, these leases are released, and the nodes switch to addresses from the dhcp_start to dhcp_end range.
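
Once overcloud nodes are deployed, their permanent ctlplane addresses can be confirmed from the undercloud. ctlplane is the default name of the TripleO provisioning network; output is omitted here because it depends on the deployed nodes:

(undercloud) [stack@director ~]$ openstack port list --network ctlplane -c 'Fixed IP Addresses'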

IP info

In the following output, the br-ctlplane bridge carries the 172.25.249.0 provisioning network, including the undercloud admin and public VIPs (172.25.249.202 and 172.25.249.201). The eth0 interface is on the 172.25.250.0 public network; eth1 has no IPv4 address of its own because it is attached to the br-ctlplane bridge as its physical port.

(undercloud) [stack@director ~]$ ip -br a
lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0             UP             172.25.250.200/24 fe80::1f21:2be6:7500:dfef/64
eth1             UP             fe80::5054:ff:fe00:f9c8/64
ovs-system       DOWN
br-int           DOWN
br-ctlplane      UNKNOWN        172.25.249.200/24 172.25.249.202/32 172.25.249.201/32 fe80::5054:ff:fe00:f9c8/64
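
br-ctlplane is an Open vSwitch bridge, so listing its ports shows which physical interface the undercloud attached to the provisioning network. Given local_interface = eth1 in undercloud.conf, eth1 is the expected result here (shown as an expectation, not captured output):

(undercloud) [stack@director ~]$ sudo ovs-vsctl list-ports br-ctlplane
eth1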

Checking the IP range assigned to the br-ctlplane bridge

(undercloud) [stack@director ~]$ openstack subnet show ctlplane-subnet
+-------------------+-----------------------------+
| Field             | Value                       |
+-------------------+-----------------------------+
| allocation_pools  | 172.25.249.51-172.25.249.59 |
| cidr              | 172.25.249.0/24             |
| created_at        | 2020-10-22T09:22:35Z        |
| description       |                             |
| dns_nameservers   |                             |
| enable_dhcp       | True                        |
| gateway_ip        | 172.25.249.200              |

Viewing Provisioning Resources

At the start of hardware configuration, the Bare Metal provisioning service uses IPMI to power on the managed nodes. By default, the nodes then boot over PXE: they request a temporary IP address from the DHCP server, fetch the bootable temporary kernel and ramdisk images, and begin the network boot process.

Listing all images

(undercloud) [stack@director ~]$ openstack image list
+--------------------------------------+------------------------+--------+
| ID                                   | Name                   | Status |
+--------------------------------------+------------------------+--------+
| da2b80ea-5ffc-400c-bc0c-82b04facad9e | overcloud-full         | active |
| 9826607c-dff5-45b0-b0c4-78c44b8665e9 | overcloud-full-initrd  | active |
| bc188e61-99c5-4d32-8c32-e1e3d467149d | overcloud-full-vmlinuz | active |
+--------------------------------------+------------------------+--------+
  • overcloud-full

    This image typically contains a complete operating system along with all the necessary components required for deploying OpenStack.

  • overcloud-full-initrd

    This is the initrd image, which includes essential files and drivers needed during the boot process.

  • overcloud-full-vmlinuz

    This is the compressed Linux kernel image that starts the operating system.
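
Each of the images above can be inspected for its size, status, and properties. For example (output omitted):

(undercloud) [stack@director ~]$ openstack image show overcloud-full -c name -c size -c status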

Listing all registered nodes

(undercloud) [stack@director ~]$ openstack baremetal node list -c Name -c 'Power State' -c 'Provisioning State'
+-------------+-------------+--------------------+
| Name        | Power State | Provisioning State |
+-------------+-------------+--------------------+
| controller0 | power on    | active             |
| compute0    | power on    | active             |
| computehci0 | power on    | active             |
| compute1    | power on    | active             |
| ceph0       | power on    | active             |
+-------------+-------------+--------------------+
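
Details for a single node, including the power and provisioning state columns shown above, come from openstack baremetal node show. For example (output omitted):

(undercloud) [stack@director ~]$ openstack baremetal node show controller0 \
> -c power_state -c provision_state -c driver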

Power management on the Undercloud

In a typical overcloud deployment, the nodes are physical machines, such as blade or rack-mounted servers, with out-of-band management interfaces that allow unattended remote power control.

When nodes are registered, the Bare Metal service loads their power management parameters from a configuration file, in this case instackenv-initial.json. This file tells the Bare Metal service how to reach and control the power interface of each node.

(undercloud) [stack@director ~]$ cat instackenv-initial.json
{
  "nodes": [
    {
      "name": "controller0",
      "arch": "x86_64",
      "cpu": "2",
      "disk": "40",
      "memory": "8192",
      "mac": [ "52:54:00:00:f9:01" ],
      "pm_addr": "172.25.249.101",
      "pm_type": "pxe_ipmitool",
      "pm_user": "admin",
      "pm_password": "password",
      "pm_port": "623",
      "capabilities": "node:controller0,boot_option:local"
    },
    {
      "name": "compute0",
      "arch": "x86_64",
      "cpu": "2",
      "disk": "40",
      "memory": "6144",
      "mac": [ "52:54:00:00:f9:02" ],
      "pm_addr": "172.25.249.102",
      "pm_type": "pxe_ipmitool",
      "pm_user": "admin",
      "pm_password": "password",
      "pm_port": "623",
      "capabilities": "node:compute0,boot_option:local"
    },

Performing IPMI Power Management

Checking the IPMI addresses

(undercloud) [stack@director ~]$ cat instackenv-initial.json | jq '.nodes[] | {name: .name, pm_addr: .pm_addr, pm_user: .pm_user, pm_password: .pm_password}'
{
  "name": "controller0",
  "pm_addr": "172.25.249.101",
  "pm_user": "admin",
  "pm_password": "password"
}
{
  "name": "compute0",
  "pm_addr": "172.25.249.102",
  "pm_user": "admin",
  "pm_password": "password"
}
{
  "name": "computehci0",
  "pm_addr": "172.25.249.106",
  "pm_user": "admin",
  "pm_password": "password"
}
{
  "name": "compute1",
  "pm_addr": "172.25.249.112",
  "pm_user": "admin",
  "pm_password": "password"
}
{
  "name": "ceph0",
  "pm_addr": "172.25.249.103",
  "pm_user": "admin",
  "pm_password": "password"
}

Power operations

(undercloud) [stack@director ~]$ ipmitool -I lanplus \
> -U admin -P password -H 172.25.249.101 power status
Chassis Power is on

(undercloud) [stack@director ~]$ openstack baremetal node power on controller0
(undercloud) [stack@director ~]$ openstack baremetal node power off controller0
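
After issuing a power command, the node's state can be confirmed either through the Bare Metal service or directly over IPMI; the two should agree:

(undercloud) [stack@director ~]$ openstack baremetal node show controller0 -c power_state
(undercloud) [stack@director ~]$ ipmitool -I lanplus \
> -U admin -P password -H 172.25.249.101 power status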