The Director undercloud node is an all-in-one Red Hat OpenStack Platform (RHOSP) deployment that provides the services used to install and manage OpenStack overclouds. The deployment toolset is TripleO (OpenStack-On-OpenStack). TripleO reuses existing OpenStack components, such as Heat, together with Ansible playbooks, to provision, deploy, and configure bare-metal systems as OpenStack cloud nodes.
The undercloud is a deployment cloud for overclouds, in which the workload is the overcloud nodes themselves, such as the controller, compute, and storage nodes.
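Before looking at the individual services, it helps to see the overall flow. A typical TripleO deployment driven from the undercloud looks roughly like the sketch below, together with uploading the overcloud images (shown later in the Listing all images section); the node definition file name matches the one used later in this article, and the exact options vary per environment.

[stack@director ~]$ openstack undercloud install
(undercloud) [stack@director ~]$ openstack overcloud node import ~/instackenv-initial.json
(undercloud) [stack@director ~]$ openstack overcloud node introspect --all-manageable --provide
(undercloud) [stack@director ~]$ openstack overcloud deploy --templates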
Components and Services
Identity Service (Keystone)
The Identity service provides user authentication and authorization, but only to the undercloud’s OpenStack services.
Image Service (Glance)
The Image service stores the initial images to be deployed to bare-metal nodes. These images contain the Red Hat Enterprise Linux (RHEL) operating system, a KVM hypervisor, and container runtimes.
Compute Service (Nova)
The Compute service works with the Bare Metal service to provision nodes, by introspecting available systems to obtain hardware attributes. The Compute service’s scheduling function filters the available nodes to ensure that selected nodes meet role requirements.
Bare Metal Service (Ironic)
The Bare Metal service manages and provisions physical machines. The ironic-inspector service performs introspection by PXE booting unregistered hardware. The undercloud uses an out-of-band management interface, such as IPMI, to perform power management during introspection.
Orchestration Service (Heat)
The Orchestration service provides a set of YAML templates and node roles that define the configuration and provisioning instructions for overcloud deployment. The default orchestration templates are located in /usr/share/openstack-tripleo-heat-templates and are designed to be customized with environment parameter files (see the example following this list of components).
Object Service (Swift)
The undercloud Object store holds images, deployment logs, and introspection results.
Networking Service (Neutron)
The Networking service configures interfaces for the required provisioning and external networks. The provisioning network provides DHCP and PXE boot functions for bare-metal nodes. The external network provides public connectivity.
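As a sketch of how the orchestration templates are consumed, an overcloud deployment is launched with openstack overcloud deploy, which reads the default templates and layers custom environment files on top of them; the environment file names below are hypothetical placeholders.

(undercloud) [stack@director ~]$ openstack overcloud deploy --templates \
> -e ~/templates/node-info.yaml \
> -e ~/templates/custom-overrides.yaml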
Viewing services
When the undercloud installation completes, a stackrc file is placed in the stack user's home directory. This file is the key to the undercloud: it defines the environment variables needed to access the undercloud's OpenStack services.
The stackrc file is sourced automatically from the stack user's .bashrc, so every new shell for the stack user gains admin-level access to the undercloud. With it loaded, you can manage and interact with the undercloud services as an administrator.
One particularly important variable in this file is OS_AUTH_URL. It points to the public endpoint of the undercloud's Identity service; in other words, it tells the OpenStack clients where to authenticate before they talk to the rest of the undercloud services.
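For example, assuming the default stackrc location in the stack user's home directory, you can load it and list the undercloud services like this (the endpoint address shown is illustrative):

(undercloud) [stack@director ~]$ source ~/stackrc
(undercloud) [stack@director ~]$ echo $OS_AUTH_URL
https://172.25.249.201:13000
(undercloud) [stack@director ~]$ openstack service list -c Name -c Type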
Using DHCP and PXE boot, a dedicated high-throughput provisioning network is employed to prepare and deploy the overcloud nodes. This dedicated network acts like an “exclusive lane,” purpose-built to quickly and efficiently handle node configuration and deployment.
Once you’ve deployed the overcloud, the undercloud continues to play a supporting role on this provisioning network—managing the overcloud nodes and updating them as needed. Since the provisioning network is isolated, it’s completely separate from the internal overcloud traffic and external workload traffic, ensuring no interference. This separation results in a more stable overcloud environment and makes ongoing management much easier.
In simple terms, the undercloud sets up a “fast lane” for overcloud nodes—speeding up deployment while quietly managing things in the background, without affecting any other traffic. It’s a brilliantly designed setup!
Two related settings in undercloud.conf control the addressing on this provisioning network:
dhcp_start and dhcp_end: The range of IP addresses dynamically assigned via DHCP. IPs within this range are allocated to nodes once they have completed PXE boot and registration, and become their permanent management IPs (also known as ctlplane IPs). In other words, these are the addresses the nodes ultimately use after booting.
inspection_iprange: The temporary IP range assigned to nodes during PXE boot, used by ironic-inspector (or other node discovery tools). These IPs are leased to nodes only while they are being discovered or prepared for deployment; once that stage is complete, the IPs are released and the nodes switch to addresses from the dhcp_start to dhcp_end range.
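As a concrete sketch, assuming the configuration lives in the stack user's ~/undercloud.conf and uses the 172.25.249.0/24 provisioning network shown later, the relevant settings might look like this (the exact values are environment specific):

(undercloud) [stack@director ~]$ grep -E '^(dhcp_start|dhcp_end|inspection_iprange)' ~/undercloud.conf
dhcp_start = 172.25.249.51
dhcp_end = 172.25.249.59
inspection_iprange = 172.25.249.150,172.25.249.180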
ip info
In the following output, the br-ctlplane bridge holds the 172.25.249.0/24 provisioning network addresses. The eth0 interface is on the 172.25.250.0/24 public network.
(undercloud) [stack@director ~]$ ip -br a
lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0             UP             172.25.250.200/24 fe80::1f21:2be6:7500:dfef/64
eth1             UP             fe80::5054:ff:fe00:f9c8/64
ovs-system       DOWN
br-int           DOWN
br-ctlplane      UNKNOWN        172.25.249.200/24 172.25.249.202/32 172.25.249.201/32 fe80::5054:ff:fe00:f9c8/64
Check the IP addresses assigned to the br-ctlplane bridge.
At the very start of hardware preparation, the Bare Metal provisioning service uses IPMI to power on the managed nodes. By default, these nodes then boot via PXE: they request a temporary IP address from the DHCP server and fetch the bootable temporary kernel and ramdisk images, and from there the network boot process begins.
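As a sketch, a single node can be run through introspection again by moving it to the manageable state and starting introspection explicitly; controller0 is used here because it appears in the node listing later in this article.

(undercloud) [stack@director ~]$ openstack baremetal node manage controller0
(undercloud) [stack@director ~]$ openstack baremetal introspection start controller0
(undercloud) [stack@director ~]$ openstack baremetal introspection status controller0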
Listing all images
(undercloud) [stack@director ~]$ openstack image list
+--------------------------------------+------------------------+--------+
| ID                                   | Name                   | Status |
+--------------------------------------+------------------------+--------+
| da2b80ea-5ffc-400c-bc0c-82b04facad9e | overcloud-full         | active |
| 9826607c-dff5-45b0-b0c4-78c44b8665e9 | overcloud-full-initrd  | active |
| bc188e61-99c5-4d32-8c32-e1e3d467149d | overcloud-full-vmlinuz | active |
+--------------------------------------+------------------------+--------+
overcloud-full
This image typically contains a complete operating system along with all the necessary components required for deploying OpenStack.
overcloud-full-initrd
This is the initrd image, which includes essential files and drivers needed during the boot process.
overcloud-full-vmlinuz
This is the compressed Linux kernel image, responsible for booting the operating system.
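These three images are typically uploaded to the undercloud Image service in a single step. Assuming the image files have been placed under ~/images/ (the path varies per environment), the upload looks like this:

(undercloud) [stack@director ~]$ openstack overcloud image upload --image-path ~/images/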
Listing all registered nodes
(undercloud) [stack@director ~]$ openstack baremetal node list -c Name -c 'Power State' -c 'Provisioning State'
+-------------+-------------+--------------------+
| Name        | Power State | Provisioning State |
+-------------+-------------+--------------------+
| controller0 | power on    | active             |
| compute0    | power on    | active             |
| computehci0 | power on    | active             |
| compute1    | power on    | active             |
| ceph0       | power on    | active             |
+-------------+-------------+--------------------+
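To check one of these nodes individually, for example to confirm its power and provisioning state, you can query it directly; controller0 is taken from the listing above.

(undercloud) [stack@director ~]$ openstack baremetal node show controller0 -c power_state -c provision_state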
Power management on the Undercloud
In a typical overcloud deployment, the nodes are mostly physical machines, such as blade servers or rack-mounted servers. These systems provide unattended out-of-band management interfaces that allow remote power control, which is exactly what the undercloud relies on.
As for the Bare Metal service, it loads the power management parameters for each node at registration time. These parameters are read from a node definition file, here called instackenv-initial.json. Simply put, this file acts as the manual that tells the Bare Metal service how to reach and control the power of each node.
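A minimal node entry in such a file might look roughly like the following sketch; exact field names vary slightly between releases, the MAC address is purely illustrative, and the IPMI credentials and address mirror the ipmitool example below.

(undercloud) [stack@director ~]$ cat ~/instackenv-initial.json
{
  "nodes": [
    {
      "name": "controller0",
      "pm_type": "ipmi",
      "pm_user": "admin",
      "pm_password": "password",
      "pm_addr": "172.25.249.101",
      "mac": ["52:54:00:00:f9:01"]
    }
  ]
}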
(undercloud) [stack@director ~]$ ipmitool -I lanplus \
> -U admin -P password -H 172.25.249.101 power status
Chassis Power is on
(undercloud) [stack@director ~]$ openstack baremetal node power on controller0
(undercloud) [stack@director ~]$ openstack baremetal node power off controller0
Copyright Notice: This article is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Please attribute the original author and source when sharing.