Wednesday, 20 August 2025

Installing OpenShift on Bare Metal using Full Control (UPI) – Quick Step-by-Step Guide

This guide explains how to deploy OpenShift on bare metal (UPI) in a VMware environment, using a service node, a bootstrap node, control plane (master) nodes, and worker nodes.

Log in to the Red Hat Console:

Go to Clusters -> Data Center -> Other Datacenter Options -> Bare Metal (x86_64)



References:

1. GitHub - ryanhay/ocp4-metal-install: Install OpenShift 4 on Bare Metal - UPI

2. How to install OpenShift 4 on Bare Metal - User Provisioned Infrastructure (UPI)

3. How to Install OpenShift 4 on Bare Metal through User Provisioned Infrastructure (UPI)

4. RedHat OpenShift Bootcamp | HA Cluster Deployment with UPI method part 1

 



Requirements:

0. Service node / bastion

Any Linux; I am using Rocky.

Two interfaces, eth0 and eth1:

·         eth0: external, NAT, automatic IP.

·         eth1: internal, host-only (192.168.22.1/24).

Services: firewall/router, DNS, web server, ingress/load balancer, DHCP.

What we need on it:

·         OpenShift installer: installer script from console.redhat.com

·         Pull secret: from the Red Hat console

·         CLI tools: client tools (oc, kubectl)

·         Red Hat CoreOS boot ISO: rhcos*.iso

·         Actual Red Hat CoreOS image: rhcos*.raw.gz

·         Ignition files: applied on each node together with the raw.gz image via the coreos-installer command

·         Timezone: UTC, same as CoreOS by default

·         NAT / IP forwarding: enabled using firewalld, from internal to external

1. Bootstrap node

Based on the installer. nmtui: IP 192.168.22.200, GW 192.168.22.1, NS 192.168.22.1, search domain ocp.lan

2. Masters

Based on the installer. nmtui: IPs 192.168.22.201-203, GW 192.168.22.1, NS 192.168.22.1, search domain ocp.lan

3. Workers

Based on the installer. nmtui: IPs 192.168.22.211-213, GW 192.168.22.1, NS 192.168.22.1, search domain ocp.lan

4. VMware VMX files

In all VMs' vmx files, add this before booting:

Disk.EnableUUID = "TRUE"

5. VM storage

Check the drive type with lsblk:

·         SCSI / SATA controllers → devices show up as /dev/sdX

·         NVMe controllers → devices show up as /dev/nvmeXnY


My VMware Settings (Virtual Network Editor)

To simulate a production-like isolated OpenShift environment, I used a host-only network as a private subnet accessible via a bastion (service) node. The bastion has an additional NAT interface (vmnet8) to share the host’s internet, acting as a gateway for the OpenShift nodes.

 


 


1.    Service Node Setup:

a)     Network and Firewall

The service node functions as the gateway, DNS server, web server, firewall, and load balancer. It uses two network interfaces to route traffic between the internal and external networks in both directions.

Set up the IPs with nmtui:

nmtui

ens192: automatic IP

ens224: 192.168.22.1/24

DNS: 127.0.0.1

Search domain: ocp.lan

Enable: Never use this network for default route

Then bring the connection up:

nmcli connection up ens224

Assigning zones

[root@ocp-svc ~]# nmcli connection modify ens192 connection.zone external

[root@ocp-svc ~]# nmcli connection modify ens224 connection.zone internal

[root@ocp-svc ~]# firewall-cmd --get-active-zones

Enable NAT

[root@ocp-svc ~]# firewall-cmd --zone=external --add-masquerade --permanent

[root@ocp-svc ~]# firewall-cmd --zone=internal --add-masquerade --permanent

Enable packet forwarding

[root@ocp-svc ~]# cat /proc/sys/net/ipv4/ip_forward

[root@ocp-svc ~]# firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -i ens224 -o ens192 -j ACCEPT --permanent

[root@ocp-svc ~]# firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -i ens192 -o ens224 -m state --state RELATED,ESTABLISHED -j ACCEPT --permanent

[root@ocp-svc ~]# cat /proc/sys/net/ipv4/ip_forward

1

verify

firewall-cmd --reload

firewall-cmd --list-all --zone=internal

firewall-cmd --list-all --zone=external

firewall-cmd --get-active-zones
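To confirm NAT works end to end, you can test from any internal host that uses 192.168.22.1 as its gateway (a quick sanity check of my own, not one of the repo’s steps):

ping -c 3 8.8.8.8

curl -I https://mirror.openshift.com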

a1)    Yum Configuration on a Red Hat-based OS

1.

For the Service Node, attach the OS ISO:

VM > Removable Devices > Connect, then check Settings to verify that the ISO is attached.

2

Mount the DVD ISO at /mnt

[root@ocp-svc ~]# mount /dev/sr0 /mnt

 

3

Create a yum repo, for example, dvd

[root@ocp-svc ~]# vim /etc/yum.repos.d/dvd.repo

[AppStream]

baseurl=file:///mnt/AppStream

enabled=1

gpgcheck=0

 

[BaseOS]

baseurl=file:///mnt/BaseOS

enabled=1

gpgcheck=0

[root@ocp-svc ~]# yum clean all

Now try to install, for instance

[root@ocp-svc ~]# yum install git -y

 

b)    Git repo

1.

Install git and clone the repository

[root@ocp-svc ~]# git clone https://github.com/ryanhay/ocp4-metal-install.git

[root@ocp-svc ~]# ls ocp4-metal-install/

dhcpd.conf  diagram  dns  haproxy.cfg  install-config.yaml  manifest  README.md

 

c)     DNS

This is our DNS (BIND) server, and we will use the named service.

Install the BIND server

[root@ocp-svc ~]# yum install bind bind-utils -y

 

copy configs

copy config files and zone folder (forward and reverse.zone)

[root@ocp-svc ~]# cp ~/ocp4-metal-install/dns/named.conf /etc/named.conf

[root@ocp-svc ~]# cp -R ~/ocp4-metal-install/dns/zones /etc/named/

 

DNS

Assign the worker node entries to the master node IPs, since the masters are configured as schedulable and no dedicated workers are available initially. Once the control plane is fully operational, you can scale and add separate worker nodes.

 

 

Forward

vi /etc/named/zones/db.ocp.lan

; Worker Nodes

ocp-w-1.lab.ocp.lan.        IN      A      192.168.22.201

ocp-w-2.lab.ocp.lan.        IN      A      192.168.22.212

Reverse

[root@ocp-svc ~]# cat /etc/named/zones/db.reverse

 

 

;

201    IN    PTR    ocp-w-1.lab.ocp.lan.

212    IN    PTR    ocp-w-2.lab.ocp.lan.

Restart dns

[root@ocp-svc ~]# systemctl enable named

[root@ocp-svc ~]# systemctl start named

[root@ocp-svc ~]# systemctl status named

 


Note: the worker entries above (forward and reverse) can also be added later, once the workers are added; ocp-w-1 points at the first master’s IP because the masters are schedulable.

At this stage, also change the DNS on the external network interface and point it to 127.0.0.1.

Check

dig ocp.lan -> still resolved via 8.8.8.8

Change

DNS: 127.0.0.1

Enable: Ignore automatically obtained DNS parameters

Verify

dig ocp-bootstrap.lab.ocp.lan -> 192.168.22.200

dig -x 192.168.22.200 -> ocp-bootstrap.lab.ocp.lan

 

 

d)    Firewall

 

Use these firewall commands to enable ports

 

 

 

firewall-cmd --add-port=53/udp --zone=internal --permanent

firewall-cmd --add-port=53/tcp --zone=internal --permanent

firewall-cmd --add-port=8080/tcp --zone=internal --permanent

firewall-cmd --add-port=6443/tcp --zone=internal --permanent # kube-api-server on control plane nodes

firewall-cmd --add-port=6443/tcp --zone=external --permanent # kube-api-server on control plane nodes

firewall-cmd --add-port=22623/tcp --zone=internal --permanent # machine-config server

firewall-cmd --add-service=http --zone=internal --permanent # web services hosted on worker nodes

firewall-cmd --add-service=http --zone=external --permanent # web services hosted on worker nodes

firewall-cmd --add-service=https --zone=internal --permanent # web services hosted on worker nodes

firewall-cmd --add-service=https --zone=external --permanent # web services hosted on worker nodes

firewall-cmd --add-port=9000/tcp --zone=external --permanent # HAProxy Stats

firewall-cmd --reload

firewall-cmd --zone=internal --set-target=ACCEPT --permanent

firewall-cmd --add-service=dhcp --zone=internal --permanent

firewall-cmd --reload        

 

Verify

for z in external internal; do firewall-cmd --zone=$z --list-all; done

e)     OpenShift Installer

2

Make a separate folder, ocp-install, for the installation files.

[root@ocp-svc ~]#  mkdir ~/ocp-install

[root@ocp-svc ~]# cp ocp4-metal-install/install-config.yaml ocp-install/

[root@ocp-svc ~]# vi ocp-install/install-config.yaml

3.

Create an SSH key pair and paste the public key into ocp-install/install-config.yaml

[root@ocp-svc ~]# ssh-keygen

Press Enter through the prompts to finish.

[root@ocp-svc ~]# cat ~/.ssh/id_rsa.pub

Also update the number of masters, and set OVNKubernetes instead of OpenShiftSDN.

Keep the workers 0, because we will add them later.

4.

Copy the pull secret from the Red Hat Console and paste it into the ocp-install/install-config.yaml file.

 

 

 

5

Download the client and installer into the home directory from:

mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/4.19.9/

1.      openshift-client-linux.tar.gz

2.      openshift-install-linux.tar.gz

[root@ocp-svc ~]# tar xvf openshift-install-linux.tar.gz

[root@ocp-svc ~]# ./openshift-install version

./openshift-install 4.19.9

 

[root@ocp-svc ~]# tar xvf openshift-client-linux.tar.gz

[root@ocp-svc ~]# mv oc kubectl /usr/local/bin

6

Download the ISO and raw.gz files from

https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.19/latest/

 

1.      rhcos-4.19.23-x86_64-metal.x86_64.raw.gz (the actual OS image)

2.      rhcos-4.19.23-x86_64-live-iso.x86_64.iso (the boot image)

Alternatively, get the exact download URLs from the installer:

[root@ocp-svc ~]# ./openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.metal.formats.iso.disk.location'

https://rhcos.mirror.openshift.com/art/storage/prod/streams/rhel-9.6/builds/9.6.20250523-0/x86_64/rhcos-9.6.20250523-0-live-iso.x86_64.iso

 

[root@ocp-svc ~]# ./openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.metal.formats."raw.gz".disk.location'

https://rhcos.mirror.openshift.com/art/storage/prod/streams/rhel-9.6/builds/9.6.20250523-0/x86_64/rhcos-9.6.20250523-0-metal.x86_64.raw.gz

 

 

done



f)       Create manifests and ignition files   

1

We have already pasted our public key and pull secret and modified the install config file:

vi /root/ocp-install/install-config.yaml

Modify:

master replicas: 3

worker replicas: 0 (line 6)

networking networkType: OVNKubernetes (line 17) instead of OpenShiftSDN

pullSecret: paste a fresh copy from the Red Hat console (line 23)

sshKey: paste your SSH public key from ~/.ssh/id_rsa.pub (line 24)

A sketch of the resulting file follows this list.
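For orientation, here is a minimal sketch of the edited file; the cluster name "lab" and base domain "ocp.lan" are assumptions inferred from the lab.ocp.lan names used throughout this guide, and the real template comes from the cloned repo:

apiVersion: v1
baseDomain: ocp.lan
metadata:
  name: lab
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
networking:
  networkType: OVNKubernetes
platform:
  none: {}
pullSecret: '<paste the pull secret JSON here>'
sshKey: 'ssh-rsa AAAA... root@ocp-svc'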

 

Run this command to generate the manifests files in the ocp-install directory.

[root@ocp-svc ~]# ./openshift-install create manifests --dir ~/ocp-install/
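Note: create manifests consumes install-config.yaml (which is why a later directory listing shows install-config.yaml.bak), so keep a backup copy beforehand:

[root@ocp-svc ~]# cp ~/ocp-install/install-config.yaml ~/ocp-install/install-config.yaml.bak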

1.1

In the manifests folder, update the cluster-scheduler-02-config.yml file and set mastersSchedulable: true.

This allows the master nodes to act as workers for application workloads. Accordingly, the DNS forward and reverse zones are configured to point worker entries to the master node IPs. Once dedicated worker nodes are available, they can be added alongside the masters.

 

2

Now create ignition files based on the manifests we created in the last step.

 

[root@ocp-svc ~]# ./openshift-install create ignition-configs --dir=/root/ocp-install/

 

Done. We are ready.


g)     Webserver

This node will function as the web server using Apache. All ignition and CoreOS files are hosted here, allowing it to serve the nodes for remote installation.

Move the files to the web server root

[root@ocp-svc ~]# mkdir /var/www/html/ocp4

 

[root@ocp-svc ~]# ls ~/ocp-install

auth  bootstrap.ign  install-config.yaml.bak  master.ign  metadata.json  worker.ign

[root@ocp-svc ~]#

[root@ocp-svc ~]#

[root@ocp-svc ~]# cp -R ocp-install/* /var/www/html/ocp4/

[root@ocp-svc ~]# ls

anaconda-ks.cfg  ocp4-metal-install  ocp-install  ocp-install.manifests  openshift-install  rhcos-metal.x86_64.raw.gz

[root@ocp-svc ~]# mv rhcos-metal.x86_64.raw.gz /var/www/html/ocp4/rhcos

 

 

Set the SELinux context and make the apache service account the owner

[root@ocp-svc ~]# chcon -R -t httpd_sys_content_t /var/www/html/ocp4/

chown -R apache: /var/www/html/ocp4/

chmod 755 /var/www/html/ocp4/
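If httpd is not installed yet, or still listens on the default port 80, a minimal setup would look like the following; the Listen change to 8080 is assumed because every URL in this guide, including the ignition URLs, uses port 8080:

[root@ocp-svc ~]# dnf install httpd -y

[root@ocp-svc ~]# sed -i 's/Listen 80/Listen 8080/' /etc/httpd/conf/httpd.conf

[root@ocp-svc ~]# systemctl enable --now httpd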

 

verify

[root@ocp-svc ~]# curl localhost:8080

 

[root@ocp-svc ~]# curl localhost:8080/ocp4

 

h)    Load Balancer (HAProxy)

In a UPI deployment, HAProxy is commonly used as the load balancer for API and Ingress traffic.

Config files

dnf install haproxy -y

\cp ~/ocp4-metal-install/haproxy.cfg /etc/haproxy/haproxy.cfg

 

/etc/haproxy/haproxy.cfg

 


 

Masters are schedulable

In the load balancer configuration, under ocp_http_ingress_backend, the master nodes are added as backend workers since no dedicated worker nodes are available. As the masters are schedulable in this setup, both the HTTP (80) and HTTPS (443) backend sections are updated accordingly.

 

backend ocp_http_ingress_backend

    balance source

    mode tcp

    server      ocp-w-1 192.168.22.201:80 check

    server      ocp-w-2 192.168.22.212:80 check

 

backend ocp_https_ingress_backend

    mode tcp

    balance source

    server      ocp-w-1 192.168.22.201:443 check

    server      ocp-w-2 192.168.22.212:443 check


 

backend k8s_api_backend
server bootstrap 192.168.22.200:6443 check
server master1 192.168.22.201:6443 check
server master2 192.168.22.202:6443 check
server master3 192.168.22.203:6443 check

 

backend ocp_machine_config_server_backend

server bootstrap 192.168.22.200:22623 check
server master1 192.168.22.201:22623 check
server master2 192.168.22.202:22623 check
server master3 192.168.22.203:22623 check

 

start

setsebool -P haproxy_connect_any 1

systemctl enable haproxy

systemctl start haproxy

systemctl status haproxy
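To confirm HAProxy is listening on the expected frontends (6443, 22623, 80, 443, and 9000 for stats, assuming the repo’s haproxy.cfg), check the listening sockets:

[root@ocp-svc ~]# ss -tlnp | grep haproxy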


Notes:

·  The ocp-w-2 backend lines can be added later, once a worker exists.

·  The first server in k8s_api_backend is the bootstrap node; once the bootstrap is removed, delete that line from the load balancer.

·  Likewise, the bootstrap line in ocp_machine_config_server_backend is deleted when the bootstrap node is removed.

i) DHCP Server (Optional)

1

yum install dhcp-server

2

cp ~/ocp4-metal-install/dhcpd.conf /etc/dhcp/dhcpd.conf

3

Enter the MAC address of each VM’s network adapter, along with its fixed IP, as in the example below:

 

[root@ocp-svc ~]# vim /etc/dhcp/dhcpd.conf

 

host ocp-bootstrap {

 hardware ethernet 00:50:56:31:F7:AF;

 fixed-address 192.168.22.200;

}

 

host ocp-cp-1 {

 hardware ethernet 00:50:56:24:EE:2C;

 fixed-address 192.168.22.201;

}

 

host ocp-cp-2 {

 hardware ethernet 00:50:56:29:D2:A5;

 fixed-address 192.168.22.202;

}

 

host ocp-cp-3 {

 hardware ethernet 00:50:56:28:63:6E;

 fixed-address 192.168.22.203;

}

 

host ocp-w-1 {

 hardware ethernet 00:50:56:2D:54:D6;

 fixed-address 192.168.22.211;

}

 

host ocp-w-2 {

 hardware ethernet 00:50:56:2C:4A:D6;

 fixed-address 192.168.22.212;

}
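Finally, enable and start the DHCP service (dhcpd is the standard service name for the dhcp-server package):

[root@ocp-svc ~]# systemctl enable --now dhcpd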

 

 

j) NFS Server for file sharing

 

[root@ocp-svc ~]# yum install nfs-utils

 

mkdir -p /shares/registry

chown -R nobody:nobody /shares/registry

chmod -R 777 /shares/registry

 

echo "/shares/registry  192.168.22.0/24(rw,sync,root_squash,no_subtree_check,no_wdelay)" > /etc/exports

exportfs -rv

 

firewall-cmd --zone=internal --add-service mountd --permanent

firewall-cmd --zone=internal --add-service rpc-bind --permanent

firewall-cmd --zone=internal --add-service nfs --permanent

firewall-cmd --reload

 

systemctl enable nfs-server rpcbind

systemctl start nfs-server rpcbind nfs-mountd
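Verify that the export is visible:

[root@ocp-svc ~]# showmount -e localhost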


2.    Bootstrap node:

This is a temporary node. It initially forms a minimal cluster, assists in bootstrapping the master/control plane nodes, and is removed once at least one control plane node is up and operational.

1

Boot with the ISO we downloaded.

Check the hard drive name with the lsblk command.

Check the network devices with the ip link command.

2

Set up the network and hostname using

[core@localhost ~]$ sudo nmtui

IP: 192.168.22.200/24

GW: 192.168.22.1

DNS: 192.168.22.1

Search domain: ocp.lan

Hostname: ocp-bootstrap

sudo nmcli con show

sudo nmcli con mod <connection-name> \

  ipv4.addresses 192.168.22.200/24 \

  ipv4.gateway 192.168.22.1 \

  ipv4.dns 192.168.22.1 \

  ipv4.dns-search ocp.lan \

  ipv4.method manual

 

sudo nmcli con up <connection-name>

sudo hostnamectl set-hostname ocp-bootstrap

 

3

Run the installer (-u and -I are the short forms of --image-url and --ignition-url):

[core@localhost ~]$ sudo coreos-installer install \

  --copy-network \

  --insecure \

  --insecure-ignition \

  --image-url=http://192.168.22.1:8080/ocp4/rhcos \

  --ignition-url=http://192.168.22.1:8080/ocp4/bootstrap.ign \

  /dev/nvme0n1

Note on step 3

With --offline we don’t need the signature file (.sig); --offline is meant for air-gapped installs.
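Before running the installer, it can save a reboot to confirm the node can actually reach the web server (a sanity check of my own, not part of the original steps):

curl -I http://192.168.22.1:8080/ocp4/bootstrap.ign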

 

4

Reboot


  

api:6443 works after bootstrap API starts, while *.apps only works after ingress and workers are fully ready.

 

api:6443 → served by the control plane

*.apps → served by ingress + workers

Before installation

curl https://api.lab.ocp.lan:6443/version

·  No API server exists yet

·  HAProxy has no backend serving the API

During bootstrap

curl https://api.lab.ocp.lan:6443/version

It works ONLY IF:

  • the bootstrap node has started kube-apiserver
  • HAProxy is correctly routing to the bootstrap

Bootstrap completed

curl https://api.lab.ocp.lan:6443/version

It will return JSON:

{
"major": "1",
"minor": "29",
"gitVersion": "v1.xx.x"
}

The API server is alive; the control plane is forming.

After the control plane is ready

curl -k https://api.lab.ocp.lan:6443/healthz

oc get nodes works, and curl healthz returns OK.

After ingress + workers

curl http://test.apps.lab.ocp.lan

Works ONLY when:

  • the ingress controller is deployed
  • router pods are running
  • workers are ready (or routers are scheduled on masters)

These two components (DNS and HAProxy) are critical in OpenShift UPI deployments, as a significant number of installation issues are typically caused by misconfigured DNS or load balancer settings.
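A quick sanity check of both from the service node (this assumes the api, api-int, and *.apps records defined in the repo’s zone files):

for h in api.lab.ocp.lan api-int.lab.ocp.lan test.apps.lab.ocp.lan; do dig +short $h @127.0.0.1; done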

 

3.    Master Nodes

1

Boot with the ISO we downloaded.

Check the hard drive name with the lsblk command.

Check the network devices with the ip link command.

2

Set up the network and hostname using

[core@localhost ~]$ sudo nmtui

IP: 192.168.22.201/24 (.202 and .203 for the other masters)

GW: 192.168.22.1

DNS: 192.168.22.1

Search domain: ocp.lan

Hostname: ocp-cp-1

3

Run installer

sudo coreos-installer install \

  --copy-network \

  --insecure \

  --insecure-ignition \

  --image-url=http://192.168.22.1:8080/ocp4/rhcos \

  --ignition-url=http://192.168.22.1:8080/ocp4/master.ign \

  /dev/nvme0n1

4

reboot

 

4.    Worker Nodes

1

Boot with the ISO we downloaded.

Check the hard drive name with the lsblk  command

Check the network devices with the ip link command

2

Set up the network and hostname using

[core@localhost ~]$ sudo nmtui

IP: 192.168.22.211/24 (.212 for ocp-w-2)

GW: 192.168.22.1

DNS: 192.168.22.1

Search domain: ocp.lan

Hostname: ocp-w-1

3

Run installer

sudo coreos-installer install \

  --copy-network \

  --insecure \

  --insecure-ignition \

  --image-url=http://192.168.22.1:8080/ocp4/rhcos \

  --ignition-url=http://192.168.22.1:8080/ocp4/worker.ign \

  /dev/nvme0n1

4

reboot

 

Optional:

Bash completion for OpenShift

 

oc completion bash > /etc/bash_completion.d/openshift

source /etc/bash_completion.d/openshift

openshift-install completion bash > /etc/bash_completion.d/openshift-install

source /etc/bash_completion.d/openshift-install

 

 

 

Check installation progress on the service node

 

openshift-install --dir ocp-install/ wait-for bootstrap-complete --log-level=debug

 

 

openshift-install --dir ocp-install wait-for install-complete  --log-level=debug

 

Check progress on the bootstrap node

ssh core@ocp-bootstrap.lab.ocp.lan

 

journalctl -b -f -u release-image.service -u bootkube.service

 

 

journalctl -f

 

Installed Operators

 

oc get clusteroperators.config.openshift.io

 

oc get subscriptions.operators.coreos.com --all-namespaces

 

Approve Worker Certificates if pending

 

oc get csr | grep -i pending

 

oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

 

Access using CLI - Export Kubeconfig for CLI

export KUBECONFIG=~/ocp-install/auth/kubeconfig

oc get nodes
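The auth directory also contains the kubeadmin password for the web console; with cluster name lab and base domain ocp.lan, the default console route would be https://console-openshift-console.apps.lab.ocp.lan:

cat ~/ocp-install/auth/kubeadmin-password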

 

Add Worker Nodes:

1

Adjust the DNS, HAProxy, and DHCP (MAC address) entries if not done earlier.

DNS forward zone: add a new line for each worker (ocp-w-1 still shares master1’s IP):

; Worker Nodes

ocp-w-1.lab.ocp.lan.        IN      A      192.168.22.201

ocp-w-2.lab.ocp.lan.        IN      A      192.168.22.212

 

DNS Reverse zone:

201    IN    PTR    ocp-w-1.lab.ocp.lan.

212    IN    PTR    ocp-w-2.lab.ocp.lan.

 

HAProxy:

http(s)_ingress_backend

Add worker line , along with master.

 

    server      ocp-w-1 192.168.22.201:80 check

    server      ocp-w-2 192.168.22.212:80 check

 

    server      ocp-w-1 192.168.22.201:443 check

    server      ocp-w-2 192.168.22.212:443 check

2

Power on the second worker node (ocp-w-2), run coreos-installer as above, and reboot.

3

Wait for the CSRs and approve any that are pending: oc get csr.

Access Verifications

1.      

Bootstrap complete?

openshift-install --dir ocp-install/ wait-for bootstrap-complete --log-level=debug

 

