OKD version 4.7 supports Ignition specification version 3.1. All new machine configs you create going forward should be based on Ignition specification version 3.1. If you are upgrading your OKD cluster, any existing Ignition specification version 2.x machine configs will be translated automatically to specification version 3.1.
Configuring chrony time service
You can set the time server and related settings used by the chrony time service (chronyd) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config.
Procedure
-
Create the contents of the chrony.conf
file and encode it as base64. For example:
$ cat << EOF | base64
pool 0.rhel.pool.ntp.org iburst (1)
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
EOF
1 |
Specify any valid, reachable time source. Alternatively, you can specify any of the following NTP servers: 1.rhel.pool.ntp.org, 2.rhel.pool.ntp.org, or 3.rhel.pool.ntp.org. |
Example output
cG9vbCAwLnJoZWwucG9vbC5udHAub3JnIGlidXJzdApkcmlmdGZpbGUgL3Zhci9saWIvY2hyb255
L2RyaWZ0Cm1ha2VzdGVwIDEuMCAzCnJ0Y3N5bmMKbG9nZGlyIC92YXIvbG9nL2Nocm9ueQo=
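To sanity-check an encoded string, decode it and compare the result with the original contents. This is a minimal local sketch; it assumes GNU coreutils base64, where -w0 disables line wrapping:

```shell
# Encode the chrony.conf contents as one unwrapped base64 line,
# then decode it again to confirm the round trip.
chrony_b64=$(base64 -w0 <<'EOF'
pool 0.rhel.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
EOF
)
# Decoding reproduces the original file contents.
printf '%s' "$chrony_b64" | base64 -d
```

The unwrapped form is convenient because the data URL in the machine config must carry the base64 string on a single line.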
-
Create the MachineConfig object file, replacing the base64 string with the one you just created. This example adds the file to master nodes. You can change it to worker or make an additional MachineConfig object for the worker role:
$ cat << EOF > ./masters-chrony-configuration.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: masters-chrony-configuration
spec:
  config:
    ignition:
      config: {}
      security:
        tls: {}
      timeouts: {}
      version: 3.1.0
    networkd: {}
    passwd: {}
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,cG9vbCAwLnJoZWwucG9vbC5udHAub3JnIGlidXJzdApkcmlmdGZpbGUgL3Zhci9saWIvY2hyb255L2RyaWZ0Cm1ha2VzdGVwIDEuMCAzCnJ0Y3N5bmMKbG9nZGlyIC92YXIvbG9nL2Nocm9ueQo=
        mode: 420
        overwrite: true
        path: /etc/chrony.conf
  osImageURL: ""
EOF
-
Make a backup copy of the configuration file.
-
Apply the configuration in one of two ways:
-
If the cluster is not up yet, generate manifest files, add this file to the openshift directory, and then continue to create the cluster.
-
If the cluster is already running, apply the file as follows:
$ oc apply -f ./masters-chrony-configuration.yaml
Adding kernel arguments to nodes
In some special cases, you might want to add kernel arguments to a set of nodes in your cluster. This should only be done with caution and a clear understanding of the implications of the arguments you set.
Warning: Improper use of kernel arguments can result in your systems becoming unbootable.
Examples of kernel arguments you could set include:
-
selinux=0: Disables Security Enhanced Linux (SELinux). While not recommended for production, disabling SELinux can improve performance by 2% to 3%.
-
nosmt: Disables symmetric multithreading (SMT) in the kernel. Multithreading allows multiple logical threads for each CPU. You could consider nosmt in multi-tenant environments to reduce risks from potential cross-thread attacks. By disabling SMT, you essentially choose security over performance.
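On a node, the live kernel arguments can be read from /proc/cmdline. The matching logic can be sketched locally in plain shell; the cmdline string below is illustrative, not taken from a real node:

```shell
# Report whether a given kernel argument appears in a cmdline string.
# The sample cmdline is illustrative.
cmdline='BOOT_IMAGE=/ostree/rhcos-... console=tty0 rw selinux=0'
has_arg() {
  # Kernel arguments are whitespace-separated tokens; compare each one exactly.
  for arg in $cmdline; do
    [ "$arg" = "$1" ] && return 0
  done
  return 1
}
has_arg selinux=0 && echo "selinux=0 is set"
has_arg nosmt || echo "nosmt is not set"
```

Exact-token comparison avoids false matches such as finding `smt` inside an unrelated argument.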
In the following procedure, you create a MachineConfig
object that identifies:
-
A set of machines to which you want to add the kernel argument.
In this case, machines with a worker role.
-
Kernel arguments that are appended to the end of the existing kernel arguments.
-
A label that indicates where in the list of machine configs the change is applied.
Procedure
-
List existing MachineConfig objects for your OKD cluster to determine how to label your machine config:
$ oc get MachineConfig
Example output
NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED
00-master 577c2d527b09cd7a481a162c50592139caa15e20 3.1.0 30m
00-worker 577c2d527b09cd7a481a162c50592139caa15e20 3.1.0 30m
01-master-container-runtime 577c2d527b09cd7a481a162c50592139caa15e20 3.1.0 30m
01-master-kubelet 577c2d527b09cd7a481a162c50592139caa15e20 3.1.0 30m
01-worker-container-runtime 577c2d527b09cd7a481a162c50592139caa15e20 3.1.0 30m
01-worker-kubelet 577c2d527b09cd7a481a162c50592139caa15e20 3.1.0 30m
99-master-1131169f-dae9-11e9-b5dd-12a845e8ffd8-registries 577c2d527b09cd7a481a162c50592139caa15e20 3.1.0 30m
99-master-ssh 3.1.0 30m
99-worker-114e8ac7-dae9-11e9-b5dd-12a845e8ffd8-registries 577c2d527b09cd7a481a162c50592139caa15e20 3.1.0 30m
99-worker-ssh 3.1.0 30m
rendered-master-b3729e5f6124ca3678188071343115d0 577c2d527b09cd7a481a162c50592139caa15e20 3.1.0 30m
rendered-worker-18ff9506c718be1e8bd0a066850065b7 577c2d527b09cd7a481a162c50592139caa15e20 3.1.0 30m
-
Create a MachineConfig object file that identifies the kernel argument (for example, 05-worker-kernelarg-selinuxoff.yaml):
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker (1)
  name: 05-worker-kernelarg-selinuxoff (2)
spec:
  config:
    ignition:
      version: 3.1.0
  kernelArguments:
    - selinux=0 (3)
1 |
Applies the new kernel argument only to worker nodes. |
2 |
Named to identify where it fits among the machine configs (05) and what it does (adds
a kernel argument to turn off SELinux). |
3 |
Identifies the exact kernel argument as selinux=0 . |
-
Create the new machine config:
$ oc create -f 05-worker-kernelarg-selinuxoff.yaml
-
Check the machine configs to see that the new one was added:
$ oc get machineconfig
Example output
NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED
00-master 577c2d527b09cd7a481a162c50592139caa15e20 3.1.0 31m
00-worker 577c2d527b09cd7a481a162c50592139caa15e20 3.1.0 31m
01-master-container-runtime 577c2d527b09cd7a481a162c50592139caa15e20 3.1.0 31m
01-master-kubelet 577c2d527b09cd7a481a162c50592139caa15e20 3.1.0 31m
01-worker-container-runtime 577c2d527b09cd7a481a162c50592139caa15e20 3.1.0 31m
01-worker-kubelet 577c2d527b09cd7a481a162c50592139caa15e20 3.1.0 31m
05-worker-kernelarg-selinuxoff 3.1.0 105s
99-master-1131169f-dae9-11e9-b5dd-12a845e8ffd8-registries 577c2d527b09cd7a481a162c50592139caa15e20 3.1.0 31m
99-master-ssh 3.1.0 30m
99-worker-114e8ac7-dae9-11e9-b5dd-12a845e8ffd8-registries 577c2d527b09cd7a481a162c50592139caa15e20 3.1.0 31m
99-worker-ssh 3.1.0 31m
rendered-master-b3729e5f6124ca3678188071343115d0 577c2d527b09cd7a481a162c50592139caa15e20 3.1.0 31m
rendered-worker-18ff9506c718be1e8bd0a066850065b7 577c2d527b09cd7a481a162c50592139caa15e20 3.1.0 31m
-
Check the nodes:
$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION
ip-10-0-136-161.ec2.internal Ready worker 28m v1.20.0
ip-10-0-136-243.ec2.internal Ready master 34m v1.20.0
ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.20.0
ip-10-0-142-249.ec2.internal Ready master 34m v1.20.0
ip-10-0-153-11.ec2.internal Ready worker 28m v1.20.0
ip-10-0-153-150.ec2.internal Ready master 34m v1.20.0
You can see that scheduling on each worker node is disabled as the change is being applied.
-
Check that the kernel argument worked by going to one of the worker nodes and listing
the kernel command line arguments (in /proc/cmdline
on the host):
$ oc debug node/ip-10-0-141-105.ec2.internal
Example output
Starting pod/ip-10-0-141-105ec2internal-debug ...
To use host binaries, run `chroot /host`
sh-4.2# cat /host/proc/cmdline
BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8
rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16...
coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 selinux=0
sh-4.2# exit
You should see the selinux=0
argument added to the other kernel arguments.
Adding a real-time kernel to nodes
Some OKD workloads require a high degree of determinism. While Linux is not a real-time operating system, the Linux real-time kernel includes a preemptive scheduler that provides the operating system with real-time characteristics.
If your OKD workloads require these real-time characteristics, you can switch your machines to the Linux real-time kernel. For OKD, you can make this switch by using a MachineConfig object. Although making the change is as simple as changing a machine config kernelType setting to realtime, there are a few other considerations before making the change:
-
Currently, the real-time kernel is supported only on worker nodes, and only for radio access network (RAN) use.
-
The following procedure is fully supported with bare metal installations that use systems that are certified for Red Hat Enterprise Linux for Real Time 8.
-
Real-time support in OKD is limited to specific subscriptions.
-
The following procedure is also supported for use with Google Cloud Platform.
Procedure
-
Create a machine config for the real-time kernel: Create a YAML file (for example, 99-worker-realtime.yaml
) that contains a MachineConfig
object for the realtime
kernel type. This example tells the cluster to use a real-time kernel for all worker nodes:
$ cat << EOF > 99-worker-realtime.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: "worker"
  name: 99-worker-realtime
spec:
  kernelType: realtime
EOF
-
Add the machine config to the cluster:
$ oc create -f 99-worker-realtime.yaml
-
Check the real-time kernel: After each impacted node reboots, log in to the cluster and run the following commands to make sure that the real-time kernel has replaced the regular kernel for the set of nodes you configured:
$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION
ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.20.0
ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.20.0
ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.20.0
$ oc debug node/ip-10-0-143-147.us-east-2.compute.internal
Example output
Starting pod/ip-10-0-143-147us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
sh-4.4# uname -a
Linux <worker_node> 4.18.0-147.3.1.rt24.96.el8_1.x86_64 #1 SMP PREEMPT RT
Wed Nov 27 18:29:55 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
The kernel name contains rt, and the text “PREEMPT RT” indicates that this is a real-time kernel.
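If you need to script this check, you can test the kernel release string (as printed by `uname -r`) for the `.rt` substring. This is a sketch; the sample release string below is illustrative:

```shell
# Classify a kernel release string; real-time builds carry a ".rt" component.
# The sample string is illustrative, not from a live node.
release='4.18.0-147.3.1.rt24.96.el8_1.x86_64'
case "$release" in
  *.rt*) echo "real-time kernel" ;;
  *)     echo "standard kernel" ;;
esac
```

Matching `.rt` (with the leading dot) avoids accidental matches on unrelated parts of the release string.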
-
To go back to the regular kernel, delete the MachineConfig
object:
$ oc delete -f 99-worker-realtime.yaml
Configuring journald settings
If you need to configure settings for the journald
service on OKD nodes, you can do that by modifying the appropriate configuration file and passing the file to the appropriate pool of nodes as a machine config.
This procedure describes how to modify journald
rate limiting settings in the /etc/systemd/journald.conf
file and apply them to worker nodes. See the journald.conf
man page for information on how to use that file.
Procedure
-
Create the contents of the /etc/systemd/journald.conf file. For example:
$ cat > /tmp/jrnl.conf <<EOF
# Disable rate limiting
RateLimitInterval=1s
RateLimitBurst=10000
Storage=volatile
Compress=no
MaxRetentionSec=30s
EOF
-
Convert the temporary jrnl.conf file to base64 and save it in a variable (jrnl_cnf):
$ export jrnl_cnf=$( cat /tmp/jrnl.conf | base64 -w0 )
$ echo $jrnl_cnf
IyBEaXNhYmxlIHJhdGUgbGltaXRpbmcKUmF0ZUxpbWl0SW50ZXJ2YWw9MXMKUmF0ZUxpbWl0QnVyc3Q9MTAwMDAKU3RvcmFnZT12b2xhdGlsZQpDb21wcmVzcz1ubwpNYXhSZXRlbnRpb25TZWM9MzBzCg==
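Before embedding the variable in a machine config, you can confirm that decoding reproduces the original file. This self-contained sketch repeats the two steps above and then checks the round trip:

```shell
# Recreate the journald settings, encode them, and verify the round trip.
cat > /tmp/jrnl.conf <<'EOF'
# Disable rate limiting
RateLimitInterval=1s
RateLimitBurst=10000
Storage=volatile
Compress=no
MaxRetentionSec=30s
EOF
jrnl_cnf=$(base64 -w0 < /tmp/jrnl.conf)
# cmp exits 0 only if the decoded stream matches the original file byte for byte.
printf '%s' "$jrnl_cnf" | base64 -d | cmp -s /tmp/jrnl.conf - && echo "round trip OK"
```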
-
Create the machine config, including the encoded contents of journald.conf
(jrnl_cnf
variable):
$ cat > /tmp/40-worker-custom-journald.yaml <<EOF
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 40-worker-custom-journald
spec:
  config:
    ignition:
      config: {}
      security:
        tls: {}
      timeouts: {}
      version: 3.1.0
    networkd: {}
    passwd: {}
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,${jrnl_cnf}
          verification: {}
        mode: 420
        path: /etc/systemd/journald.conf
    systemd: {}
  osImageURL: ""
EOF
-
Apply the machine config to the pool:
$ oc apply -f /tmp/40-worker-custom-journald.yaml
-
Check that the new machine config is applied and that the nodes are not in a degraded state. It might take a few minutes. The worker pool will show the updates in progress, as each node successfully has the new machine config applied:
$ oc get machineconfigpool
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE
master rendered-master-35 True False False 3 3 3 0 34m
worker rendered-worker-d8 False True False 3 1 1 0 34m
-
To check that the change was applied, you can log in to a worker node:
$ oc get node | grep worker
ip-10-0-0-1.us-east-2.compute.internal Ready worker 39m v0.0.0-master+$Format:%h$
$ oc debug node/ip-10-0-0-1.us-east-2.compute.internal
Starting pod/ip-10-0-141-142us-east-2computeinternal-debug ...
...
sh-4.2# chroot /host
sh-4.4# cat /etc/systemd/journald.conf
# Disable rate limiting
RateLimitInterval=1s
RateLimitBurst=10000
Storage=volatile
Compress=no
MaxRetentionSec=30s
sh-4.4# exit
Configuring container image registry settings
Settings that define the registries that OKD uses to get container images are held in the /etc/containers/registries.conf
file by default. In that file, you can set registries to not require authentication (insecure), point to mirrored registries, or set which registries are searched for unqualified container image requests.
Rather than change registries.conf
directly, you can drop configuration files into the /etc/containers/registries.d
directory that are then automatically appended to the system’s existing registries.conf
settings.
This procedure describes how to create a registries.d file (/etc/containers/registries.d/99-worker-unqualified-search-registries.conf) that adds quay.io as an unqualified search registry (one that OKD can search when it tries to pull an image name that does not include the registry name). It includes base64-encoded content that you can examine as follows:
$ echo dW5xdWFsaWZpZWQtc2VhcmNoLXJlZ2lzdHJpZXMgPSBbJ3JlZ2lzdHJ5LmFjY2Vzcy5yZWRoYXQuY29tJywgJ2RvY2tlci5pbycsICdxdWF5LmlvJ10K | base64 -d
unqualified-search-registries = ['registry.access.redhat.com', 'docker.io', 'quay.io']
See the containers-registries.conf
man page for the format for the registries.conf
and registries.d
directory files.
Procedure
-
Create a YAML file (myregistry.yaml
) to hold the contents of the /etc/containers/registries.d/99-worker-unqualified-search-registries.conf
file, including the encoded base64 contents for that file. For example:
$ cat > /tmp/myregistry.yaml <<EOF
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-unqualified-search-registries
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,dW5xdWFsaWZpZWQtc2VhcmNoLXJlZ2lzdHJpZXMgPSBbJ3JlZ2lzdHJ5LmFjY2Vzcy5yZWRoYXQuY29tJywgJ2RvY2tlci5pbycsICdxdWF5LmlvJ10K
        mode: 420
        path: /etc/containers/registries.d/99-worker-unqualified-search-registries.conf
EOF
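The same pattern generalizes to any registries.d drop-in: encode a local file and substitute the result into the data URL. The following self-contained sketch shows this; the file name, object name, and output path are illustrative, not part of the documented procedure:

```shell
# Build a registries.d machine config from a local drop-in file.
# All names and paths here are illustrative examples.
cat > /tmp/99-extra-registries.conf <<'CONF'
unqualified-search-registries = ['registry.access.redhat.com', 'docker.io', 'quay.io']
CONF
b64=$(base64 -w0 < /tmp/99-extra-registries.conf)
# The unquoted YAML delimiter lets ${b64} expand inside the heredoc.
cat <<YAML > /tmp/99-worker-extra-registries.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-extra-registries
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,${b64}
        mode: 420
        path: /etc/containers/registries.d/99-extra-registries.conf
YAML
```

You could then apply the generated file with `oc apply -f`, as in the procedure above.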
-
Apply the machine config to the pool:
$ oc apply -f /tmp/myregistry.yaml
-
Check that the new machine config has been applied and that the nodes are not in a degraded state. It might take a few minutes. The worker pool will show the updates in progress, as each machine successfully has the new machine config applied:
$ oc get machineconfigpool
Example output
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE
master rendered-master-35 True False False 3 3 3 0 34m
worker rendered-worker-d8 False True False 3 1 1 0 34m
-
To check that the change was applied, you can log in to a worker node:
$ oc get node | grep worker
Example output
ip-10-0-0-1.us-east-2.compute.internal Ready worker 39m v0.0.0-master+$Format:%h$
$ oc debug node/ip-10-0-0-1.us-east-2.compute.internal
Example output
Starting pod/ip-10-0-141-142us-east-2computeinternal-debug ...
...
sh-4.2# chroot /host
sh-4.4# cat /etc/containers/registries.d/99-worker-unqualified-search-registries.conf
unqualified-search-registries = ['registry.access.redhat.com', 'docker.io', 'quay.io']
sh-4.4# exit
Adding extensions to FCOS
FCOS is a minimal, container-oriented, Fedora-based operating system, designed to provide a common set of capabilities to OKD clusters across all platforms. While adding software packages to FCOS systems is generally discouraged, the MCO provides an extensions feature you can use to add a minimal set of features to FCOS nodes.
Currently, the usbguard extension is available.
The following procedure describes how to use a machine config to add one or more extensions to your FCOS nodes.
Procedure
-
Create a machine config for extensions: Create a YAML file (for example, 80-extensions.yaml) that contains a MachineConfig object with an extensions list. This example tells the cluster to add the usbguard extension.
$ cat << EOF > 80-extensions.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 80-worker-extensions
spec:
  config:
    ignition:
      version: 3.1.0
  extensions:
  - usbguard
EOF
-
Add the machine config to the cluster:
$ oc create -f 80-extensions.yaml
This installs the usbguard RPM packages on all worker nodes.
-
Check that the extensions were applied:
$ oc get machineconfig 80-worker-extensions
Example output
NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE
80-worker-extensions 3.1.0 57s
-
Check that the new machine config is now applied and that the nodes are not in a degraded state. It may take a few minutes. The worker pool will show the updates in progress, as each machine successfully has the new machine config applied:
$ oc get machineconfigpool
Example output
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE
master rendered-master-35 True False False 3 3 3 0 34m
worker rendered-worker-d8 False True False 3 1 1 0 34m
-
Check that the extension was applied:
$ oc get node | grep worker
Example output
NAME STATUS ROLES AGE VERSION
ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.18.3
$ oc debug node/ip-10-0-169-2.us-east-2.compute.internal
Example output
...
To use host binaries, run `chroot /host`
sh-4.4# chroot /host
sh-4.4# rpm -q usbguard
usbguard-0.7.4-4.el8.x86_64
Use the "Configuring chrony time service" section as a model for how to add other configuration files to OKD nodes.