The Agent-based installation method provides the flexibility to boot your on-premises servers in any way that you choose. It combines the ease of use of the Assisted Installation service with the ability to run offline, including in air-gapped environments. Agent-based installation is a subcommand of the OpenShift Container Platform installer. It generates a bootable ISO image containing all of the information required to deploy an OpenShift Container Platform cluster, with an available release image.
The configuration is in the same format as for the installer-provisioned infrastructure and user-provisioned infrastructure installation methods. The Agent-based Installer can also optionally generate or accept Zero Touch Provisioning (ZTP) custom resources. ZTP allows you to provision new edge sites with declarative configurations of bare-metal equipment.
CPU architecture | Connected installation | Disconnected installation | Comments |
---|---|---|---|
64-bit x86 | ✓ | ✓ | |
64-bit ARM | ✓ | ✓ | |
ppc64le | ✓ | ✓ | |
s390x | ✓ | ✓ | ISO boot is not supported. Instead, use PXE assets. |
As an OpenShift Container Platform user, you can leverage the advantages of the Assisted Installer hosted service in disconnected environments.
The Agent-based installation comprises a bootable ISO that contains the Assisted discovery agent and the Assisted Service. Both are required to perform the cluster installation, but the latter runs on only one of the hosts.
Currently, ISO boot is not supported on IBM Z® (s390x). Instead, use PXE assets.
The openshift-install agent create image subcommand generates an ephemeral ISO based on the inputs that you provide. You can choose to provide inputs through the following manifests; an example invocation of the subcommand follows the list:
Preferred:
install-config.yaml
agent-config.yaml
or
Optional: ZTP manifests
cluster-manifests/cluster-deployment.yaml
cluster-manifests/agent-cluster-install.yaml
cluster-manifests/pull-secret.yaml
cluster-manifests/infraenv.yaml
cluster-manifests/cluster-image-set.yaml
cluster-manifests/nmstateconfig.yaml
mirror/registries.conf
mirror/ca-bundle.crt
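For example, with the manifests placed in an assets directory, a typical invocation might look like the following sketch; the directory name and log level are illustrative:
openshift-install agent create image --dir ./ocp-agent --log-level=info
The generated ISO is written to the assets directory, for example as agent.x86_64.iso on x86_64 hosts.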
One of the control plane hosts runs the Assisted Service at the start of the boot process and eventually becomes the bootstrap host. This node is called the rendezvous host (node 0). The Assisted Service ensures that all the hosts meet the requirements and triggers an OpenShift Container Platform cluster deployment. All the nodes have the Red Hat Enterprise Linux CoreOS (RHCOS) image written to the disk. The non-bootstrap nodes reboot and initiate a cluster deployment. Once the nodes are rebooted, the rendezvous host reboots and joins the cluster. The bootstrapping is complete and the cluster is deployed.
You can install a disconnected OpenShift Container Platform cluster through the openshift-install agent create image
subcommand for the following topologies:
A single-node OpenShift Container Platform cluster (SNO): A node that is both a master and worker.
A three-node OpenShift Container Platform cluster: A compact cluster that has three master nodes that are also worker nodes.
Highly available OpenShift Container Platform cluster (HA): Three master nodes with any number of worker nodes.
Recommended cluster resources for the following topologies:
Topology | Number of control plane nodes | Number of compute nodes | vCPU | Memory | Storage |
---|---|---|---|---|---|
Single-node cluster | 1 | 0 | 8 vCPUs | 16 GB of RAM | 120 GB |
Compact cluster | 3 | 0 or 1 | 8 vCPUs | 16 GB of RAM | 120 GB |
HA cluster | 3 | 2 and above | 8 vCPUs | 16 GB of RAM | 120 GB |
In the install-config.yaml
, specify the platform on which to perform the installation. The following platforms are supported:
baremetal
vsphere
none
For the platform none option: the none option requires you to provide DNS name resolution and load balancing infrastructure for your cluster. For more information, see the requirements for a cluster using the platform none option, described later in this section.
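For the baremetal and vsphere platforms, the validations described later in this section require the apiVIPs and ingressVIPs parameters to be set. A minimal sketch of a bare-metal platform stanza in install-config.yaml, with placeholder VIP addresses, might look like:
platform:
  baremetal:
    apiVIPs:
      - 192.168.111.5
    ingressVIPs:
      - 192.168.111.4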
For many OpenShift Container Platform customers, some level of regulatory readiness, or compliance, is required before any systems can be put into production. That regulatory readiness can be imposed by national standards, industry standards, or the organization’s corporate governance framework. Federal Information Processing Standards (FIPS) compliance is one of the most critical components required in highly secure environments to ensure that only supported cryptographic technologies are allowed on nodes.
To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures.
During a cluster deployment, the Federal Information Processing Standards (FIPS) change is applied when the Red Hat Enterprise Linux CoreOS (RHCOS) machines are deployed in your cluster. For Red Hat Enterprise Linux (RHEL) machines, you must enable FIPS mode when you install the operating system on the machines that you plan to use as worker machines.
You can enable FIPS mode through the preferred method of install-config.yaml
and agent-config.yaml
:
You must set the value of the fips field to True in the install-config.yaml file:
apiVersion: v1
baseDomain: test.example.com
metadata:
name: sno-cluster
fips: True
Optional: If you are using the GitOps ZTP manifests, you must set the value of fips to true in the agent-install.openshift.io/install-config-overrides field in the agent-cluster-install.yaml file:
apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
metadata:
  annotations:
    agent-install.openshift.io/install-config-overrides: '{"fips": true}'
  name: sno-cluster
  namespace: sno-cluster-test
You can make additional configurations for each host on the cluster in the agent-config.yaml
file, such as network configurations and root device hints.
For each host you configure, you must provide the MAC address of an interface on the host to specify which host you are configuring.
Each host in the cluster is assigned a role of either master
or worker
.
You can define the role for each host in the agent-config.yaml
file by using the role
parameter.
If you do not assign a role to the hosts, the roles will be assigned at random during installation.
It is recommended to explicitly define roles for your hosts.
The rendezvousIP
must be assigned to a host with the master
role. This can be done manually or by allowing the Agent-based Installer to assign the role.
You do not need to explicitly define the role for every host. Hosts without an assigned role are given one automatically during installation. For example, if you have 4 hosts with 3 of the hosts explicitly defined to have the master role, the remaining host is automatically assigned the worker role during installation.
apiVersion: v1beta1
kind: AgentConfig
metadata:
name: example-cluster
rendezvousIP: 192.168.111.80
hosts:
- hostname: master-1
role: master
interfaces:
- name: eno1
macAddress: 00:ef:44:21:e6:a5
- hostname: master-2
role: master
interfaces:
- name: eno1
macAddress: 00:ef:44:21:e6:a6
- hostname: master-3
role: master
interfaces:
- name: eno1
macAddress: 00:ef:44:21:e6:a7
- hostname: worker-1
role: worker
interfaces:
- name: eno1
macAddress: 00:ef:44:21:e6:a8
The rootDeviceHints
parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it.
Subfield | Description |
---|---|
deviceName | A string containing a Linux device name such as /dev/vda. The hint must match the actual value exactly. |
hctl | A string containing a SCSI bus address like 0:0:0:0. The hint must match the actual value exactly. |
model | A string containing a vendor-specific device identifier. The hint can be a substring of the actual value. |
vendor | A string containing the name of the vendor or manufacturer of the device. The hint can be a substring of the actual value. |
serialNumber | A string containing the device serial number. The hint must match the actual value exactly. |
minSizeGigabytes | An integer representing the minimum size of the device in gigabytes. |
wwn | A string containing the unique storage identifier. The hint must match the actual value exactly. |
rotational | A boolean indicating whether the device should be a rotating disk (true) or not (false). |
- hostname: master-0
  role: master
  rootDeviceHints:
    deviceName: "/dev/sda"
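Because a device must match all of the hints that you set, you can combine hints to narrow the selection further. The following sketch (the values are illustrative) selects the first non-rotational disk of at least 120 GB:
- hostname: master-1
  role: master
  rootDeviceHints:
    minSizeGigabytes: 120
    rotational: false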
The rendezvous IP must be known at the time of generating the agent ISO, so that during the initial boot all the hosts can check in to the assisted service.
If the IP addresses are assigned using a Dynamic Host Configuration Protocol (DHCP) server, then the rendezvousIP
field must be set to an IP address of one of the hosts that will become part of the deployed control plane.
In an environment without a DHCP server, you can define IP addresses statically.
In addition to static IP addresses, you can apply any network configuration that is in NMState format. This includes VLANs and NIC bonds.
install-config.yaml and agent-config.yaml
You must specify the value for the rendezvousIP field. The networkConfig fields can be left blank:
apiVersion: v1alpha1
kind: AgentConfig
metadata:
name: sno-cluster
rendezvousIP: 192.168.111.80 (1)
1 | The IP address for the rendezvous host. |
Preferred method: install-config.yaml and agent-config.yaml
cat > agent-config.yaml << EOF
apiVersion: v1alpha1
kind: AgentConfig
metadata:
name: sno-cluster
rendezvousIP: 192.168.111.80 (1)
hosts:
- hostname: master-0
interfaces:
- name: eno1
macAddress: 00:ef:44:21:e6:a5 (2)
networkConfig:
interfaces:
- name: eno1
type: ethernet
state: up
mac-address: 00:ef:44:21:e6:a5
ipv4:
enabled: true
address:
- ip: 192.168.111.80 (3)
prefix-length: 23 (4)
dhcp: false
dns-resolver:
config:
server:
- 192.168.111.1 (5)
routes:
config:
- destination: 0.0.0.0/0
next-hop-address: 192.168.111.1 (6)
next-hop-interface: eno1
table-id: 254
EOF
1 | If a value is not specified for the rendezvousIP field, one address will be chosen from the static IP addresses specified in the networkConfig fields. |
2 | The MAC address of an interface on the host, used to determine which host to apply the configuration to. |
3 | The static IP address of the target bare metal host. |
4 | The static IP address’s subnet prefix for the target bare metal host. |
5 | The DNS server for the target bare metal host. |
6 | Next hop address for the node traffic. This must be in the same subnet as the IP address set for the specified interface. |
Optional method: GitOps ZTP manifests
The optional method of the GitOps ZTP custom resources comprises 6 custom resources; you can configure static IPs in the nmstateconfig.yaml
file.
apiVersion: agent-install.openshift.io/v1beta1
kind: NMStateConfig
metadata:
name: master-0
namespace: openshift-machine-api
labels:
cluster0-nmstate-label-name: cluster0-nmstate-label-value
spec:
config:
interfaces:
- name: eth0
type: ethernet
state: up
mac-address: 52:54:01:aa:aa:a1
ipv4:
enabled: true
address:
- ip: 192.168.122.2 (1)
prefix-length: 23 (2)
dhcp: false
dns-resolver:
config:
server:
- 192.168.122.1 (3)
routes:
config:
- destination: 0.0.0.0/0
next-hop-address: 192.168.122.1 (4)
next-hop-interface: eth0
table-id: 254
interfaces:
- name: eth0
macAddress: 52:54:01:aa:aa:a1 (5)
1 | The static IP address of the target bare metal host. |
2 | The static IP address’s subnet prefix for the target bare metal host. |
3 | The DNS server for the target bare metal host. |
4 | Next hop address for the node traffic. This must be in the same subnet as the IP address set for the specified interface. |
5 | The MAC address of an interface on the host, used to determine which host to apply the configuration to. |
The rendezvous IP is chosen from the static IP addresses specified in the config
fields.
This section describes the requirements for an Agent-based OpenShift Container Platform installation that is configured to use the platform none
option.
Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you attempt to install an OpenShift Container Platform cluster in virtualized or cloud environments.
In OpenShift Container Platform deployments, DNS name resolution is required for the following components:
The Kubernetes API
The OpenShift Container Platform application wildcard
The control plane and compute machines
Reverse DNS resolution is also required for the Kubernetes API, the control plane machines, and the compute machines.
DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate.
It is recommended to use a DHCP server to provide the hostnames to each cluster node.
The following DNS records are required for an OpenShift Container Platform cluster using the platform none
option and they must be in place before installation. In each record, <cluster_name>
is the cluster name and <base_domain>
is the base domain that you specify in the install-config.yaml
file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>.
.
Component | Record | Description |
---|---|---|
Kubernetes API | api.<cluster_name>.<base_domain>. | A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. |
Kubernetes API | api-int.<cluster_name>.<base_domain>. | A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. |
Routes | *.apps.<cluster_name>.<base_domain>. | A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. |
Control plane machines | <master><n>.<cluster_name>.<base_domain>. | DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. |
Compute machines | <worker><n>.<cluster_name>.<base_domain>. | DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. |
In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration.
You can use the dig command to verify name and reverse name resolution.
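For example, using the sample cluster name ocp4 and base domain example.com from the next section, you can spot-check forward, wildcard, and reverse resolution against your DNS server; the nameserver and host addresses below match the sample zone files that follow:
dig +noall +answer @192.168.1.5 api.ocp4.example.com
dig +noall +answer @192.168.1.5 test.apps.ocp4.example.com
dig +noall +answer @192.168.1.5 -x 192.168.1.97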
This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform using the platform none
option. The samples are not meant to provide advice for choosing one DNS solution over another.
In the examples, the cluster name is ocp4
and the base domain is example.com
.
The following example is a BIND zone file that shows sample A records for name resolution in a cluster using the platform none
option.
$TTL 1W
@ IN SOA ns1.example.com. root (
2019070700 ; serial
3H ; refresh (3 hours)
30M ; retry (30 minutes)
2W ; expiry (2 weeks)
1W ) ; minimum (1 week)
IN NS ns1.example.com.
IN MX 10 smtp.example.com.
;
;
ns1.example.com. IN A 192.168.1.5
smtp.example.com. IN A 192.168.1.5
;
helper.example.com. IN A 192.168.1.5
helper.ocp4.example.com. IN A 192.168.1.5
;
api.ocp4.example.com. IN A 192.168.1.5 (1)
api-int.ocp4.example.com. IN A 192.168.1.5 (2)
;
*.apps.ocp4.example.com. IN A 192.168.1.5 (3)
;
master0.ocp4.example.com. IN A 192.168.1.97 (4)
master1.ocp4.example.com. IN A 192.168.1.98 (4)
master2.ocp4.example.com. IN A 192.168.1.99 (4)
;
worker0.ocp4.example.com. IN A 192.168.1.11 (5)
worker1.ocp4.example.com. IN A 192.168.1.7 (5)
;
;EOF
1 | Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. |
2 | Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. |
3 | Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. |
4 | Provides name resolution for the control plane machines. |
5 | Provides name resolution for the compute machines. |
The following example BIND zone file shows sample PTR records for reverse name resolution in a cluster using the platform none
option.
$TTL 1W
@ IN SOA ns1.example.com. root (
2019070700 ; serial
3H ; refresh (3 hours)
30M ; retry (30 minutes)
2W ; expiry (2 weeks)
1W ) ; minimum (1 week)
IN NS ns1.example.com.
;
5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. (1)
5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. (2)
;
97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. (3)
98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. (3)
99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. (3)
;
11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. (4)
7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. (4)
;
;EOF
1 | Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. |
2 | Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. |
3 | Provides reverse DNS resolution for the control plane machines. |
4 | Provides reverse DNS resolution for the compute machines. |
A PTR record is not required for the OpenShift Container Platform application wildcard.
Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
These requirements do not apply to single-node OpenShift clusters using the platform none option.
If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately.
The load balancing infrastructure must meet the following requirements:
API load balancer: Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions:
Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes.
A stateless load balancing algorithm. The options vary based on the load balancer implementation.
Do not configure session persistence for an API load balancer.
Configure the following ports on both the front and back of the load balancers:
Port | Back-end machines (pool members) | Internal | External | Description |
---|---|---|---|---|
6443 | Control plane. You must configure the /readyz endpoint for the API server health check probe. | X | X | Kubernetes API server |
22623 | Control plane. | X | | Machine config server |
The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool.
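One way to meet this requirement with HAProxy, for example, is to drive the health check from the /readyz endpoint rather than a plain TCP check. The following is only a sketch with illustrative timings, and is not part of the sample configuration later in this section:
listen api-server-6443
    bind *:6443
    mode tcp
    option httpchk GET /readyz HTTP/1.0
    option log-health-checks
    balance roundrobin
    server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
    server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
    server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
With a 10 second check interval and fall 2, an API server instance that stops serving /readyz is removed from the pool in roughly 20 seconds, which stays inside the 30 second window.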
Application Ingress load balancer: Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster.
Configure the following conditions:
Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the ingress routes.
A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform.
If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption.
Configure the following ports on both the front and back of the load balancers:
Port | Back-end machines (pool members) | Internal | External | Description |
---|---|---|---|---|
443 | The machines that run the Ingress Controller pods, compute, or worker, by default. | X | X | HTTPS traffic |
80 | The machines that run the Ingress Controller pods, compute, or worker, by default. | X | X | HTTP traffic |
If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.
This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for clusters using the platform none
option. The sample is an /etc/haproxy/haproxy.cfg
configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another.
In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
If you are using HAProxy as a load balancer and SELinux is set to enforcing, you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1.
global
log 127.0.0.1 local2
pidfile /var/run/haproxy.pid
maxconn 4000
daemon
defaults
mode http
log global
option dontlognull
option http-server-close
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
listen api-server-6443 (1)
bind *:6443
mode tcp
server master0 master0.ocp4.example.com:6443 check inter 1s
server master1 master1.ocp4.example.com:6443 check inter 1s
server master2 master2.ocp4.example.com:6443 check inter 1s
listen machine-config-server-22623 (2)
bind *:22623
mode tcp
server master0 master0.ocp4.example.com:22623 check inter 1s
server master1 master1.ocp4.example.com:22623 check inter 1s
server master2 master2.ocp4.example.com:22623 check inter 1s
listen ingress-router-443 (3)
bind *:443
mode tcp
balance source
server worker0 worker0.ocp4.example.com:443 check inter 1s
server worker1 worker1.ocp4.example.com:443 check inter 1s
listen ingress-router-80 (4)
bind *:80
mode tcp
balance source
server worker0 worker0.ocp4.example.com:80 check inter 1s
server worker1 worker1.ocp4.example.com:80 check inter 1s
1 | Port 6443 handles the Kubernetes API traffic and points to the control plane machines. |
2 | Port 22623 handles the machine config server traffic and points to the control plane machines. |
3 | Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. |
4 | Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. |
If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443, 22623, 443, and 80 by running netstat -nltupe on the HAProxy node.
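For example, on the HAProxy node, the following command (the grep is only a convenience filter) lists the listening sockets owned by the haproxy process:
netstat -nltupe | grep haproxy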
The following agent-config.yaml
file is an example of a manifest for bond and VLAN interfaces.
apiVersion: v1alpha1
kind: AgentConfig
rendezvousIP: 10.10.10.14
hosts:
- hostname: master0
role: master
interfaces:
- name: enp0s4
macAddress: 00:21:50:90:c0:10
- name: enp0s5
macAddress: 00:21:50:90:c0:20
networkConfig:
interfaces:
- name: bond0.300 (1)
type: vlan (2)
state: up
vlan:
base-iface: bond0
id: 300
ipv4:
enabled: true
address:
- ip: 10.10.10.14
prefix-length: 24
dhcp: false
- name: bond0 (1)
type: bond (3)
state: up
mac-address: 00:21:50:90:c0:10 (4)
ipv4:
enabled: false
ipv6:
enabled: false
link-aggregation:
mode: active-backup (5)
options:
miimon: "150" (6)
port:
- enp0s4
- enp0s5
dns-resolver: (7)
config:
server:
- 10.10.10.11
- 10.10.10.12
routes:
config:
- destination: 0.0.0.0/0
next-hop-address: 10.10.10.10 (8)
next-hop-interface: bond0.300 (9)
table-id: 254
1 | Name of the interface. |
2 | The type of interface. This example creates a VLAN. |
3 | The type of interface. This example creates a bond. |
4 | The MAC address of the interface. |
5 | The mode attribute specifies the bonding mode. |
6 | Specifies the MII link monitoring frequency in milliseconds. This example inspects the bond link every 150 milliseconds. |
7 | Optional: Specifies the search and server settings for the DNS server. |
8 | Next hop address for the node traffic. This must be in the same subnet as the IP address set for the specified interface. |
9 | Next hop interface for the node traffic. |
Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following agent-config.yaml
file is an example of a manifest for a dual-port NIC with a bond and SR-IOV interfaces:
apiVersion: v1alpha1
kind: AgentConfig
rendezvousIP: 10.10.10.14
hosts:
- hostname: worker-1
interfaces:
- name: eno1
macAddress: 0c:42:a1:55:f3:06
- name: eno2
macAddress: 0c:42:a1:55:f3:07
networkConfig: (1)
interfaces: (2)
- name: eno1 (3)
type: ethernet (4)
state: up
mac-address: 0c:42:a1:55:f3:06
ipv4:
enabled: true
dhcp: false (5)
ethernet:
sr-iov:
total-vfs: 2 (6)
ipv6:
enabled: false
- name: sriov:eno1:0
type: ethernet
state: up (7)
ipv4:
enabled: false (8)
ipv6:
enabled: false
dhcp: false
- name: sriov:eno1:1
type: ethernet
state: down
- name: eno2
type: ethernet
state: up
mac-address: 0c:42:a1:55:f3:07
ipv4:
enabled: true
ethernet:
sr-iov:
total-vfs: 2
ipv6:
enabled: false
- name: sriov:eno2:0
type: ethernet
state: up
ipv4:
enabled: false
ipv6:
enabled: false
- name: sriov:eno2:1
type: ethernet
state: down
- name: bond0
type: bond
state: up
min-tx-rate: 100 (9)
max-tx-rate: 200 (10)
link-aggregation:
mode: active-backup (11)
options:
primary: sriov:eno1:0 (12)
port:
- sriov:eno1:0
- sriov:eno2:0
ipv4:
address:
- ip: 10.19.16.57 (13)
prefix-length: 23
dhcp: false
enabled: true
ipv6:
enabled: false
dns-resolver:
config:
server:
- 10.11.5.160
- 10.2.70.215
routes:
config:
- destination: 0.0.0.0/0
next-hop-address: 10.19.17.254
next-hop-interface: bond0 (14)
table-id: 254
1 | The networkConfig field contains information about the network configuration of the host, with subfields including interfaces ,dns-resolver , and routes . |
2 | The interfaces field is an array of network interfaces defined for the host. |
3 | The name of the interface. |
4 | The type of interface. This example creates an ethernet interface. |
5 | Set this to false to disable DHCP for the physical function (PF) if it is not strictly required. |
6 | Set this to the number of SR-IOV virtual functions (VFs) to instantiate. |
7 | Set this to up . |
8 | Set this to false to disable IPv4 addressing for the VF attached to the bond. |
9 | Sets a minimum transmission rate, in Mbps, for the VF. This sample value sets a rate of 100 Mbps. |
10 | Sets a maximum transmission rate, in Mbps, for the VF. This sample value sets a rate of 200 Mbps. |
11 | Sets the desired bond mode. |
12 | Sets the preferred port of the bonding interface. The primary device is the first of the bonding interfaces to be used and is not abandoned unless it fails. This setting is particularly useful when one NIC in the bonding interface is faster and, therefore, able to handle a bigger load. This setting is only valid when the bonding interface is in active-backup mode (mode 1) or balance-tlb mode (mode 5). |
13 | Sets a static IP address for the bond interface. This is the node IP address. |
14 | Sets bond0 as the gateway for the default route. |
You can customize the install-config.yaml
file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
apiVersion: v1
baseDomain: example.com (1)
compute: (2)
- name: worker
replicas: 0 (3)
controlPlane: (2)
name: master
replicas: 1 (4)
metadata:
name: sno-cluster (5)
networking:
clusterNetwork:
- cidr: 10.128.0.0/14 (6)
hostPrefix: 23 (7)
networkType: OVNKubernetes (8)
serviceNetwork: (9)
- 172.30.0.0/16
platform:
none: {} (10)
fips: false (11)
pullSecret: '{"auths": ...}' (12)
sshKey: 'ssh-ed25519 AAAA...' (13)
1 | The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. |
2 | The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used. |
3 | This parameter controls the number of compute machines that the Agent-based installation waits to discover before triggering the installation process. It is the number of compute machines that must be booted with the generated ISO. |
4 | The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. |
5 | The cluster name that you specified in your DNS records. |
6 | A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. |
7 | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23, then each node is assigned a /23 subnet out of the given cidr, which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. |
8 | The cluster network plugin to install. The default value OVNKubernetes is the only supported value. |
9 | The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. |
10 | You must set the platform to none for a single-node cluster. You can set the platform to vsphere, baremetal, or none for multi-node clusters. |
11 | Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. |
12 | This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. |
13 | The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). |
The Agent-based Installer performs validation checks on user-defined YAML files before the ISO is created. After the validations are successful, the agent ISO is created.
install-config.yaml
The baremetal, vsphere, and none platforms are supported.
The networkType parameter must be OVNKubernetes for the none platform.
The apiVIPs and ingressVIPs parameters must be set for bare metal and vSphere platforms.
Some host-specific fields in the bare metal platform configuration that have equivalents in the agent-config.yaml file are ignored. A warning message is logged if these fields are set.
agent-config.yaml
Each interface must have a defined MAC address. Additionally, all interfaces must have different MAC addresses.
At least one interface must be defined for each host.
World Wide Name (WWN) vendor extensions are not supported in root device hints.
The role parameter in the host object must have a value of either master or worker.
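Pulled together, a minimal agent-config.yaml that satisfies these checks might look like the following sketch: every interface has a unique MAC address, each host defines at least one interface, and each role is either master or worker. All values are illustrative:
apiVersion: v1alpha1
kind: AgentConfig
metadata:
  name: example-cluster
rendezvousIP: 192.168.111.80
hosts:
  - hostname: master-0
    role: master
    interfaces:
      - name: eno1
        macAddress: 00:ef:44:21:e6:a5
  - hostname: worker-0
    role: worker
    interfaces:
      - name: eno1
        macAddress: 00:ef:44:21:e6:a8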