This guide explains how to deploy OpenShift on Bare Metal (UPI) in a VMware environment using a Service Node, Bootstrap Node, Control Plane, and Worker Nodes.
Log in to the Red Hat Console:
Go to -> Clusters -> Data Center -> Other Datacenter Options -> (Bare Metal / x86_64)
References:
Requirements:

0. Service node/bastion: any Linux (Rocky Linux here). Two interfaces: eth0 external (NAT, automatic IP); eth1 internal, host-only (192.168.22.1/24).
   - Services: firewall/router, DNS, web server, ingress
   - OpenShift installer: installer binary from console.redhat.com
   - Pull secret: from the Red Hat console
   - CLI client tools
   - Red Hat CoreOS boot ISO: rhcos*.iso
   - Red Hat CoreOS raw image: rhcos*.raw.gz
   - Ignition files: applied together with the raw.gz image on each node via the coreos-installer command
   - NAT / IP forwarding: enabled with firewalld, from the internal network to the external
1. Bootstrap node: based on the installer. nmtui: IP 192.168.22.200, GW 192.168.22.1, NS 192.168.22.1, search domain ocp.lan
2. Masters: based on the installer. nmtui: IPs 192.168.22.201-203, GW 192.168.22.1, NS 192.168.22.1, search domain ocp.lan
3. Workers: based on the installer. nmtui: IPs 192.168.22.211-213, GW 192.168.22.1, NS 192.168.22.1, search domain ocp.lan
4. VMware VMX files: in every VM's .vmx file, add this line before booting: Disk.EnableUUID = "TRUE"
5. VM storage: check the drive type with lsblk. SCSI/SATA controllers expose devices as /dev/sdX; NVMe controllers expose them as /dev/nvmeXnY.
1. Service Node Setup:
a) Network and Firewall
The service node is our gateway, DNS server, web server, firewall, and router. It has two interfaces and forwards traffic from the internal network to the external network and vice versa.
Set up IP addresses with nmtui:
- ens192: automatic IP
- ens224: 192.168.22.1/24, DNS: 127.0.0.1, search domain: ocp.lan, and enable "Never use this network for default route"
Then bring the connection up:
[root@ocp-svc ~]# nmcli connection up ens224
Assign firewall zones:
[root@ocp-svc ~]# nmcli connection modify ens192 connection.zone external
[root@ocp-svc ~]# nmcli connection modify ens224 connection.zone internal
[root@ocp-svc ~]# firewall-cmd --get-active-zones
Enable NAT:
[root@ocp-svc ~]# firewall-cmd --zone=external --add-masquerade --permanent
[root@ocp-svc ~]# firewall-cmd --zone=internal --add-masquerade --permanent
Enable packet forwarding:
[root@ocp-svc ~]# cat /proc/sys/net/ipv4/ip_forward
[root@ocp-svc ~]# firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -i ens224 -o ens192 -j ACCEPT --permanent
[root@ocp-svc ~]# firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -o ens224 -i ens192 -m state --state RELATED,ESTABLISHED -j ACCEPT --permanent
[root@ocp-svc ~]# cat /proc/sys/net/ipv4/ip_forward
1
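Masquerading makes firewalld enable forwarding at runtime, which is why the second cat prints 1. To make the setting explicit and persistent across reboots, it can also be pinned with sysctl; a minimal sketch (the drop-in file name below is an assumption):

```
# /etc/sysctl.d/90-ip-forward.conf (hypothetical file name)
net.ipv4.ip_forward = 1
```

Apply it once without rebooting: sysctl -p /etc/sysctl.d/90-ip-forward.conf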
Verify:
firewall-cmd --reload
firewall-cmd --list-all --zone=internal
firewall-cmd --list-all --zone=external
firewall-cmd --get-active-zones
b) Git repo
1. Install git and clone the repository:
[root@ocp-svc ~]# dnf install -y git
[root@ocp-svc ~]# git clone https://github.com/ryanhay/ocp4-metal-install.git
[root@ocp-svc ~]# ls ocp4-metal-install/
dhcpd.conf  diagram  dns  haproxy.cfg  install-config.yaml  manifest  README.md
c) DNS
This is our DNS (BIND) server; we will use the named service.

Install the BIND server:
[root@ocp-svc ~]# dnf install -y bind bind-utils

Copy the config file and the zones folder (forward and reverse zone files):
[root@ocp-svc ~]# cp ~/ocp4-metal-install/dns/named.conf /etc/named.conf
[root@ocp-svc ~]# cp -R ~/ocp4-metal-install/dns/zones /etc/named/

Enable and start the service:
[root@ocp-svc ~]# systemctl enable named
[root@ocp-svc ~]# systemctl start named
[root@ocp-svc ~]# systemctl status named

At this stage, also change the DNS on the external network interface and point it to 127.0.0.1.
Check:
dig ocp.lan (queries are still answered via 8.8.8.8 at this point)

Change (nmtui): DNS: 127.0.0.1, and enable "Ignore automatically obtained DNS parameters".

Verify (both should resolve to the bootstrap node):
dig ocp-bootstrap.ocp.lan
dig -x 192.168.22.200 -> ocp-bootstrap.ocp.lan
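The records behind these lookups live in the zone files copied from the repo. The sketch below is illustrative only — record names and IPs are assumptions based on this guide's addressing; the repo's dns/zones files are authoritative:

```
; Hypothetical forward zone for ocp.lan (illustrative sketch)
$TTL 86400
@              IN SOA ocp-svc.ocp.lan. admin.ocp.lan. (2024010101 3600 1800 604800 86400)
@              IN NS  ocp-svc.ocp.lan.
ocp-svc        IN A   192.168.22.1
ocp-bootstrap  IN A   192.168.22.200
ocp-cp-1       IN A   192.168.22.201
ocp-cp-2       IN A   192.168.22.202
ocp-cp-3       IN A   192.168.22.203
ocp-w-1        IN A   192.168.22.211
ocp-w-2        IN A   192.168.22.212
api            IN A   192.168.22.1   ; API and ingress terminate on the
api-int        IN A   192.168.22.1   ; service node, which runs HAProxy
*.apps         IN A   192.168.22.1
```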
d) Firewall
Use these firewall commands to open the required ports:
firewall-cmd --add-port=53/udp --zone=internal --permanent   # DNS
firewall-cmd --add-port=53/tcp --zone=internal --permanent   # DNS
firewall-cmd --add-port=8080/tcp --zone=internal --permanent # web server serving installation files
firewall-cmd --add-port=6443/tcp --zone=internal --permanent # kube-apiserver on control plane nodes
firewall-cmd --add-port=6443/tcp --zone=external --permanent # kube-apiserver on control plane nodes
firewall-cmd --add-port=22623/tcp --zone=internal --permanent # machine-config server
firewall-cmd --add-service=http --zone=internal --permanent  # web services hosted on worker nodes
firewall-cmd --add-service=http --zone=external --permanent  # web services hosted on worker nodes
firewall-cmd --add-service=https --zone=internal --permanent # web services hosted on worker nodes
firewall-cmd --add-service=https --zone=external --permanent # web services hosted on worker nodes
firewall-cmd --add-port=9000/tcp --zone=external --permanent # HAProxy stats
firewall-cmd --reload
[root@ocp-svc ~]# firewall-cmd --zone=internal --set-target=ACCEPT --permanent
firewall-cmd --reload
e) OpenShift Installer

2. Make a separate folder (ocp-install) for the installation files:
[root@ocp-svc ~]# mkdir ~/ocp-install
[root@ocp-svc ~]# cp ocp4-metal-install/install-config.yaml ocp-install/
[root@ocp-svc ~]# vi ocp-install/install-config.yaml
3. Create an SSH key pair and paste the public key into ocp-install/install-config.yaml:
[root@ocp-svc ~]# ssh-keygen
(press Enter through the prompts to accept the defaults)
[root@ocp-svc ~]# cat ~/.ssh/id_rsa.pub
4. Copy the pull secret from the Red Hat console and paste it into ocp-install/install-config.yaml.
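After steps 3 and 4 the file should look roughly like this sketch of a typical bare-metal UPI install-config.yaml (hedged: the real file ships in the cloned repo; metadata.name and baseDomain here are inferred from the ocp.lan domain used throughout this guide):

```
apiVersion: v1
baseDomain: lan
metadata:
  name: ocp              # cluster FQDNs become *.ocp.lan
compute:
- name: worker
  replicas: 0            # UPI: workers boot from ignition, not the installer
controlPlane:
  name: master
  replicas: 3
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}               # "none" platform = bare-metal UPI
pullSecret: '<paste the pull secret here>'
sshKey: '<paste the contents of ~/.ssh/id_rsa.pub here>'
```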
5. Download the OpenShift installer and extract it directly into your home directory (logged in as root, so ~ is /root):
[root@ocp-svc ~]# wget https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/stable/openshift-install-linux.tar.gz
[root@ocp-svc ~]# tar xvf openshift-install-linux.tar.gz
6. Download the command-line tools, extract them, and move them to /usr/local/bin:
[root@ocp-svc ~]# wget https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/stable/openshift-client-linux.tar.gz
[root@ocp-svc ~]# tar xvf openshift-client-linux.tar.gz
[root@ocp-svc ~]# mv oc kubectl /usr/local/bin
7. Download the operating system files (ISO and raw.gz) that match the installer from step 5. Run the commands below to print the download links:
[root@ocp-svc ~]# ./openshift-install version
./openshift-install 4.19.9
built from commit 249d7428c51fe85cabdab6394c4aed86fc24d398
release image quay.io/openshift-release-dev/ocp-release@sha256:b6f3a6e7cab0bb6e2590f6e6612a3edec75e3b28d32a4e55325bdeeb7d836662
release architecture amd64
[root@ocp-svc ~]# ./openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.metal.formats.iso.disk.location'
https://rhcos.mirror.openshift.com/art/storage/prod/streams/rhel-9.6/builds/9.6.20250523-0/x86_64/rhcos-9.6.20250523-0-live-iso.x86_64.iso
[root@ocp-svc ~]# ./openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.metal.formats."raw.gz".disk.location'
Done.
f) Create manifests and ignition files

1. We have already pasted the public key and pull secret and adjusted the install config file (/root/ocp-install/install-config.yaml): the number of master and worker nodes, the platform (none for bare metal), and so on. Run this command to generate the manifest files in the ocp-install directory:
[root@ocp-svc ~]# ./openshift-install create manifests --dir ~/ocp-install/
2. Now create the ignition files based on the manifests from the last step:
[root@ocp-svc ~]# ./openshift-install create ignition-configs --dir=/root/ocp-install/

Done. We are ready.
g) Webserver
This is our web server; we will use Apache. We copy all of the ignition and CoreOS files here so it can serve them to the nodes for remote installation.
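Note that the firewall section above opens 8080/tcp and the verification below curls localhost:8080, so Apache must be installed and listening on 8080 rather than its default port 80 (80/443 on this host are used for cluster ingress). A minimal sketch, assuming the stock Rocky httpd package and default config path:

```
[root@ocp-svc ~]# dnf install -y httpd
[root@ocp-svc ~]# sed -i 's/^Listen 80$/Listen 0.0.0.0:8080/' /etc/httpd/conf/httpd.conf
[root@ocp-svc ~]# systemctl enable --now httpd
```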
Move the files to the web server root:
[root@ocp-svc ~]# mkdir /var/www/html/ocp4
[root@ocp-svc ~]# ls ~/ocp-install
auth  bootstrap.ign  install-config.yaml.bak  master.ign  metadata.json  worker.ign
[root@ocp-svc ~]# cp -R ocp-install/* /var/www/html/ocp4/
[root@ocp-svc ~]# ls
anaconda-ks.cfg  ocp4-metal-install  ocp-install  ocp-install.manifests  openshift-install  rhcos-metal.x86_64.raw.gz
[root@ocp-svc ~]# mv rhcos-metal.x86_64.raw.gz /var/www/html/ocp4/rhcos
Set the SELinux context and make the Apache service account the owner:
[root@ocp-svc ~]# chcon -R -t httpd_sys_content_t /var/www/html/ocp4/
[root@ocp-svc ~]# chown -R apache: /var/www/html/ocp4/
[root@ocp-svc ~]# chmod 755 /var/www/html/ocp4/
Verify:
[root@ocp-svc ~]# curl localhost:8080
[root@ocp-svc ~]# curl localhost:8080/ocp4
h) Ingress/Router
In a UPI deployment, HAProxy is commonly used as the load balancer for API and Ingress traffic. The relevant files from the cloned repo are haproxy.cfg and install-config.yaml.
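The repo's haproxy.cfg wires this up. An illustrative excerpt (hedged: the cloned repo's file is authoritative; server names and IPs below follow this guide's addressing):

```
# Illustrative excerpt only; see the cloned repo's haproxy.cfg.
defaults
    mode tcp                  # pass TLS through untouched
    timeout connect 5s
    timeout client  1m
    timeout server  1m

frontend k8s-api
    bind *:6443
    default_backend k8s-api
backend k8s-api
    balance roundrobin
    server ocp-bootstrap 192.168.22.200:6443 check  # remove after bootstrap completes
    server ocp-cp-1      192.168.22.201:6443 check
    server ocp-cp-2      192.168.22.202:6443 check
    server ocp-cp-3      192.168.22.203:6443 check

# The machine-config frontend (22623) mirrors the API backend; the ingress
# frontends (80/443) balance across the worker nodes instead.
```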
2. Bootstrap node:
This is a temporary node: it first comes up as a minimal one-node OpenShift cluster, the master nodes then join it and take over control, and once at least one control plane node is up it can be removed.
1. Boot with the ISO we downloaded. Check the hard drive name with the lsblk command and the network devices with the ip link command.
2. Set up the network and hostname:
[core@localhost ~]$ sudo nmtui
IP: 192.168.22.200/24
GW: 192.168.22.1
DNS: 192.168.22.1
Search domain: ocp.lan
Hostname: ocp-bootstrap
3. Run the installer (the image URL points at the raw.gz file we renamed to rhcos on the web server; the target disk is the one found with lsblk):
[core@localhost ~]$ sudo coreos-installer install \
    --copy-network \
    --insecure \
    --insecure-ignition \
    --image-url=http://192.168.22.1:8080/ocp4/rhcos \
    --ignition-url=http://192.168.22.1:8080/ocp4/bootstrap.ign \
    /dev/nvme0n1
4. Reboot.
3. Master Nodes
1. Boot with the ISO we downloaded. Check the hard drive name with the lsblk command and the network devices with the ip link command.
2. Set up the network and hostname (repeat with 192.168.22.202/203 and ocp-cp-2/ocp-cp-3 for the other masters):
[core@localhost ~]$ sudo nmtui
IP: 192.168.22.201/24
GW: 192.168.22.1
DNS: 192.168.22.1
Search domain: ocp.lan
Hostname: ocp-cp-1
3. Run the installer:
[core@localhost ~]$ sudo coreos-installer install \
    --copy-network \
    --insecure \
    --insecure-ignition \
    --image-url=http://192.168.22.1:8080/ocp4/rhcos \
    --ignition-url=http://192.168.22.1:8080/ocp4/master.ign \
    /dev/nvme0n1
4. Reboot.
4. Worker Nodes
1. Boot with the ISO we downloaded. Check the hard drive name with the lsblk command and the network devices with the ip link command.
2. Set up the network and hostname (repeat with 192.168.22.212/213 and ocp-w-2/ocp-w-3 for any additional workers):
[core@localhost ~]$ sudo nmtui
IP: 192.168.22.211/24
GW: 192.168.22.1
DNS: 192.168.22.1
Search domain: ocp.lan
Hostname: ocp-w-1
3. Run the installer:
[core@localhost ~]$ sudo coreos-installer install \
    --copy-network \
    --insecure \
    --insecure-ignition \
    --image-url=http://192.168.22.1:8080/ocp4/rhcos \
    --ignition-url=http://192.168.22.1:8080/ocp4/worker.ign \
    /dev/nvme0n1
4. Reboot.
Optional: Bash completion for OpenShift
oc completion bash > /etc/bash_completion.d/openshift
source /etc/bash_completion.d/openshift
openshift-install completion bash > /etc/bash_completion.d/openshift-install
source /etc/bash_completion.d/openshift-install
Check installation progress on the service node:
openshift-install --dir ocp-install/ wait-for bootstrap-complete --log-level=debug
openshift-install --dir ocp-install wait-for install-complete --log-level=debug
Check progress on the bootstrap node:
ssh core@ocp-bootstrap
journalctl -b -f -u release-image.service -u bootkube.service
journalctl -f
Installed operators:
oc get clusteroperators.config.openshift.io
oc get subscriptions.operators.coreos.com --all-namespaces
Approve worker certificates if pending:
oc get csr | grep -i pending
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
Access using the CLI - export the kubeconfig:
export KUBECONFIG=~/ocp-install/auth/kubeconfig
oc get nodes
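For web-console access, the auth directory created next to the kubeconfig also holds the kubeadmin password (standard installer layout); the console URL below assumes this guide's ocp.lan domain and the *.apps wildcard resolving to the service node:

```
[root@ocp-svc ~]# cat ~/ocp-install/auth/kubeadmin-password
```

Then browse to https://console-openshift-console.apps.ocp.lan and log in as kubeadmin.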