You need a machine from which to initiate the installation process (the administration host). On that host, set the OpenShift Container Platform version as an environment variable:
$ export OCP_VERSION=<ocp_version>
For example, latest-4.16.
Set the target CPU architecture:
$ export ARCH=<architecture>
The architecture can take one of the following values: aarch64, x86_64, or ppc64le.
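A complete example, assuming the latest 4.16 build on an x86_64 host:
$ export OCP_VERSION=latest-4.16
$ export ARCH=x86_64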
Download the OpenShift Container Platform client (oc) and make it available for use by entering the following commands:
$ curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$OCP_VERSION/openshift-client-linux.tar.gz -o oc.tar.gz
$ tar zxf oc.tar.gz
$ chmod +x oc
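Optionally, verify that the client binary works by checking its version:
$ ./oc version --client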
Download the OpenShift Container Platform installer and make it available for use by entering the following commands:
$ curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$OCP_VERSION/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz
$ tar zxvf openshift-install-linux.tar.gz
$ chmod +x openshift-install
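Optionally, confirm the installer binary and its version:
$ ./openshift-install version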
Retrieve the RHCOS ISO URL by running the following command:
$ ISO_URL=$(./openshift-install coreos print-stream-json | grep location | grep $ARCH | grep iso | cut -d\" -f4)
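Optionally, echo the variable to confirm that the pipeline extracted a single URL for your architecture:
$ echo $ISO_URL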
Download the RHCOS ISO:
$ curl -L $ISO_URL -o rhcos-live.iso
Prepare the install-config.yaml file:
apiVersion: v1
baseDomain: <domain>
compute:
- name: worker
  replicas: 0
controlPlane:
  name: master
  replicas: 1
metadata:
  name: <name>
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
bootstrapInPlace:
  installationDisk: /dev/disk/by-id/<disk_id>
pullSecret: '<pull_secret>'
sshKey: |
  <ssh_key>
Add the cluster domain name.
Set the compute replicas to 0. This makes the control plane node schedulable.
Set the controlPlane replicas to 1. In conjunction with the previous compute setting, this setting ensures the cluster runs on a single node.
Set the metadata name to the cluster name.
Set the networking details. OVN-Kubernetes is the only allowed network plugin type for single-node clusters.
Set the machineNetwork cidr value to match the subnet of the single-node OpenShift cluster.
Set the path to the installation disk drive, for example, /dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2. See the example after this list for one way to find this identifier.
Copy the pull secret from Red Hat OpenShift Cluster Manager and add the contents to this configuration setting.
Add the public SSH key from the administration host so that you can log in to the cluster after installation.
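If you are unsure of the disk ID, you can list the stable disk identifiers on the target host and pick the entry that corresponds to the installation disk, for example:
$ ls -l /dev/disk/by-id/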
Generate OpenShift Container Platform assets by running the following commands:
$ mkdir ocp
$ cp install-config.yaml ocp
$ ./openshift-install --dir=ocp create single-node-ignition-config
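The ocp directory should now contain the generated assets, including the bootstrap-in-place-for-live-iso.ign file that is embedded in the next step. You can confirm that it is present:
$ ls ocp/bootstrap-in-place-for-live-iso.ign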
Embed the ignition data into the RHCOS ISO by running the following commands:
$ alias coreos-installer='podman run --privileged --pull always --rm \
-v /dev:/dev -v /run/udev:/run/udev -v $PWD:/data \
-w /data quay.io/coreos/coreos-installer:release'
$ coreos-installer iso ignition embed -fi ocp/bootstrap-in-place-for-live-iso.ign rhcos-live.iso
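Optionally, confirm that the ignition data was embedded by printing it back from the ISO. Using the same alias, the iso ignition show subcommand outputs the embedded ignition config:
$ coreos-installer iso ignition show rhcos-live.iso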
Additional resources
See Requirements for installing OpenShift on a single node for more information about installing OpenShift Container Platform on a single node.
See Enabling cluster capabilities for more information about enabling cluster capabilities that were disabled prior to installation.
See Optional cluster capabilities in OpenShift Container Platform 4.16 for more information about the features provided by each capability.
Monitoring the cluster installation using openshift-install
Use openshift-install to monitor the progress of the single-node cluster installation.
Procedure
Attach the modified RHCOS installation ISO to the target host.
Configure the boot drive order in the server BIOS settings to boot from the attached RHCOS ISO, and then reboot the server.
On the administration host, monitor the installation by running the following command:
$ ./openshift-install --dir=ocp wait-for install-complete
The server restarts several times while deploying the control plane.
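If you need more detail while waiting, openshift-install accepts a --log-level flag, for example:
$ ./openshift-install --dir=ocp wait-for install-complete --log-level=debug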
Verification
After the installation is complete, check the environment by running the following commands:
$ export KUBECONFIG=ocp/auth/kubeconfig
$ oc get nodes
Example output
NAME                        STATUS   ROLES           AGE   VERSION
control-plane.example.com   Ready    master,worker   10m   v1.29.4
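As a further check, you can confirm that the cluster version has finished rolling out:
$ oc get clusterversion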
Additional resources
Creating a bootable ISO image on a USB drive
Booting from an HTTP-hosted ISO image using the Redfish API
Adding worker nodes to single-node OpenShift clusters
Installing single-node OpenShift on cloud providers
Additional requirements for installing single-node OpenShift on a cloud provider
The documentation for installer-provisioned installation on cloud providers is based on a high availability cluster consisting of three control plane nodes. When referring to the documentation, consider the differences between the requirements for a single-node OpenShift cluster and a high availability cluster.
A high availability cluster requires a temporary bootstrap machine, three control plane machines, and at least two compute machines. For a single-node OpenShift cluster, you need only a temporary bootstrap machine and one cloud instance for the control plane node; no worker nodes are required.
The minimum resource requirements for high availability cluster installation include a control plane node with 4 vCPUs and 100 GB of storage. For a single-node OpenShift cluster, you must have a minimum of 8 vCPU cores and 120 GB of storage.
The controlPlane.replicas setting in the install-config.yaml file should be set to 1.
The compute.replicas setting in the install-config.yaml file should be set to 0. This makes the control plane node schedulable.
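Putting these two settings together, the relevant fragment of the install-config.yaml file for a cloud-based single-node cluster would look like the following sketch (platform-specific fields omitted):
compute:
- name: worker
  replicas: 0
controlPlane:
  name: master
  replicas: 1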
Supported cloud providers for single-node OpenShift
The following table contains a list of supported cloud providers and CPU architectures.
Table 1. Supported cloud providers

Cloud provider                  CPU architecture
Amazon Web Services (AWS)       x86_64 and AArch64
Microsoft Azure                 x86_64
Google Cloud Platform (GCP)     x86_64 and AArch64