Node Base Images are OCI container images that are used to deliver software updates and customizations to rpm-ostree based Linux systems running on bare metal and virtual machines, such as Fedora CoreOS, Fedora IoT, Fedora Silverblue, CentOS Stream CoreOS, RHEL CoreOS, and RHEL for Edge. Node Base Images are built with OSTree Native Containers.
While application container images are deployed on a container host (e.g. on a Kubernetes or OpenShift node) and executed by a container runtime (such as podman, docker or cri-o), Node Base Images are intended to update the software and configuration of the container host operating system itself. Thus, Node Base Images are an alternative way to deliver software and configuration updates.
What is so exciting about Node Base Images?
We can now build and distribute custom OS images for bare metal or VMs with standard container tooling, or use the existing container image build services, in the cloud or on-premise, that we already use to build our application containers. There is no need to learn how to manage a build system for rpm-ostree, how to generate OSTree commits, or how to set up, manage and use the RHEL for Edge image builder. We simply use the well-known container tools.
Thus we can now use the same toolchains and services to build images for
- Application containers, which run on a container host via Podman, Kubernetes or OpenShift,
- Container hosts, which run the application containers via Podman, Kubernetes or OpenShift.
To demonstrate and discuss this technology, in this blog article we use Fedora CoreOS. Fedora is the community upstream project of RHEL. What we will do:
- First we will install “Fedora CoreOS Stable” into a libvirt VM. (You need a Linux workstation or notebook with libvirt configured to follow the instructions. If you use macOS or Windows, or another virtualization technology such as VMware or VirtualBox, you will have to adapt the instructions accordingly.)
- Next we will rebase the installed system to a Node Base Image (same “stable” stream) that we pull from the quay.io image registry.
- Then we will use the container image build service of quay.io to build a modified Node Base Image. As the base image we use “Fedora CoreOS Testing” instead of “Fedora CoreOS Stable”, and we will add the popular “nmon” utility to the custom Node Base Image. The build instructions are provided via an OCI Containerfile (what we called a Dockerfile in the early Linux container days).
- Finally we will rebase the system to the custom Node Base Image that we have just built, thus switching from “Fedora CoreOS Stable” to “Fedora CoreOS Testing” with the “nmon” utility added.
Let’s start!
1. Install Fedora CoreOS Stable into a VM
Fedora CoreOS needs an Ignition file for the installation. We first create a Butane file (YAML) that will then be converted to an Ignition file (JSON) with the “butane” tool. (YAML is more human-readable than JSON.) You should add your own SSH public key here to be able to log in via SSH to the system after installation:
awarda@fedora:$ cat fcos.bu
variant: fcos
version: 1.4.0
storage:
  files:
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: fcos
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCidlbO+EzrB0jI9A5nLBe9LGVNMqn6q+mMobaZG+kp+WlYMUadRmd/VkMnBDJEKZH9DZvshLcIGPDvmGZQJCRIAI9pyqDps4mhVmv+ewJdWO9ron3og0Q/UrB35q9Um8H1obDEVxrOY+Nd6ZGtlYC0gKHHyu/+K4JFN8W8Ur/GY9ANddU5iQfySbbRQfo0HsFMfsHtQxX8m4VczDwPgcKWxN+pt0Orx8H12HV5ZRmPgCH4bSI7e93AxoIHsiiStzWFEE*this*key*is*invalid*UoVx5myuuVZCidqq1XXIaPFgBGQ+lEuOjKIhs0gpfRr/Um/GDVYt5t7hbLLwN7nnUG+0J97xVrdqnZj0NnnKMa+aCD+Iksi3b+ygEH4/s+Bm6tWMNQlJC9TQ4KQ1zmo2g+ND9p7qNYVvLSMlExXTNcS5YPbTh8589lgNZpT11afzlIX4jmFJK/2QSVh60G+/akttF4rfkOo/mjcUnEuiMVqj26q3sdYRMrHs== awarda@fedora
If not already available in your workstation, you now have to install the “butane” utility. You should also install the JSON parser “jq” for pretty-printing of the resulting Ignition file in JSON format:
awarda@fedora:$ sudo dnf install -y butane jq
Last metadata expiration check: 5:43:42 ago on Tue 16 May 2023 04:36:03 PM CEST.
Dependencies resolved.
==================================================================
 Package    Arch    Version                            Repo   Size
==================================================================
Installing:
 butane     x86_64  0.18.0-1.fc38                      fedora 2.2 M
 jq         x86_64  1.6-15.fc38                        fedora 188 k
Installing dependencies:
 oniguruma  x86_64  6.9.8-2.D20220919gitb041f6d.fc38.1 fedora 218 k

Transaction Summary
==================================================================
Install  3 Packages

Total download size: 2.6 M
Installed size: 8.8 M
Downloading Packages:
(1/3): jq-1.6-15.fc38.x86_64.rpm     327 kB/s | 188 kB 00:00
(2/3): oniguruma-6.9.8-2.D2022091    342 kB/s | 218 kB 00:00
(3/3): butane-0.18.0-1.fc38.x86_6    944 kB/s | 2.2 MB 00:02
------------------------------------------------------------------
Total                                886 kB/s | 2.6 MB 00:03
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                        1/1
  Installing       : oniguruma-6.9.8-2.D20220919gitb041f6d. 1/3
  Installing       : jq-1.6-15.fc38.x86_64                  2/3
  Installing       : butane-0.18.0-1.fc38.x86_64            3/3
  Running scriptlet: butane-0.18.0-1.fc38.x86_64            3/3
  Verifying        : butane-0.18.0-1.fc38.x86_64            1/3
  Verifying        : jq-1.6-15.fc38.x86_64                  2/3
  Verifying        : oniguruma-6.9.8-2.D20220919gitb041f6d. 3/3

Installed:
  butane-0.18.0-1.fc38.x86_64
  jq-1.6-15.fc38.x86_64
  oniguruma-6.9.8-2.D20220919gitb041f6d.fc38.1.x86_64

Complete!
Generate the Ignition file from the Butane file:
awarda@fedora:$ butane fcos.bu | jq > fcos.ign
Have a look at the Ignition file that was generated from the Butane file:
awarda@fedora:$ cat fcos.ign
{
  "ignition": {
    "version": "3.3.0"
  },
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": [
          "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCidlbO+EzrB0jI9A5nLBe9LGVNMqn6q+mMobaZG+kp+WlYMUadRmd/VkMnBDJEKZH9DZvshLcIGPDvmGZQJCRIAI9pyqDps4mhVmv+ewJdWO9ron3og0Q/UrB35q9Um8H1obDEVxrOY+Nd6ZGtlYC0gKHHyu/+K4JFN8W8Ur/GY9ANddU5iQfySbbRQfo0HsFMfsHtQxX8m4VczDwPgcKWxN+pt0Orx8H12HV5ZRmPgCH4bSI7e93AxoIHsiiStzWFEE*this*key*is*invalid*UoVx5myuuVZCidqq1XXIaPFgBGQ+lEuOjKIhs0gpfRr/Um/GDVYt5t7hbLLwN7nnUG+0J97xVrdqnZj0NnnKMa+aCD+Iksi3b+ygEH4/s+Bm6tWMNQlJC9TQ4KQ1zmo2g+ND9p7qNYVvLSMlExXTNcS5YPbTh8589lgNZpT11afzlIX4jmFJK/2QSVh60G+/akttF4rfkOo/mjcUnEuiMVqj26q3sdYRMrHs== awarda@fedora"
        ]
      }
    ]
  },
  "storage": {
    "files": [
      {
        "path": "/etc/hostname",
        "contents": {
          "compression": "",
          "source": "data:,fcos"
        },
        "mode": 420
      }
    ]
  }
}
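Note that Ignition renders file modes in decimal, so the octal mode 0644 from the Butane file appears here as 420. You can verify the conversion in the shell:

```shell
# 420 in decimal is 644 in octal -- the mode the Butane file specified.
printf 'octal: %o\n' 420
# The reverse direction: a leading zero makes printf read 0644 as octal,
# which is 420 in decimal.
printf 'decimal: %d\n' 0644
```

This prints `octal: 644` and `decimal: 420`, confirming both representations describe the same mode.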
Next download and decompress the Fedora CoreOS Stable QEMU image:
awarda@fedora:$ wget https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/38.20230414.3.0/x86_64/fedora-coreos-38.20230414.3.0-qemu.x86_64.qcow2.xz
awarda@fedora:$ xz -d fedora-coreos-38.20230414.3.0-qemu.x86_64.qcow2.xz
Set up the correct SELinux label for libvirt and install the VM:
awarda@fedora:$ chcon --verbose --type svirt_home_t fcos.ign
changing security context of 'fcos.ign'
awarda@fedora:$ virt-install --connect="qemu:///system" --name=fcos --vcpus=2 --memory=2048 \
    --os-variant=fedora-coreos-stable --import --graphics=none \
    --disk="size=10,backing_store=$(pwd)/fedora-coreos-38.20230414.3.0-qemu.x86_64.qcow2" \
    --network bridge=virbr0 \
    --qemu-commandline="-fw_cfg name=opt/com.coreos/config,file=$(pwd)/fcos.ign"
Booting `Fedora CoreOS 38.20230414.3.0 (ostree:0)'
[    0.000000] Linux version 6.2.9-300.fc38.x86_64 (mockbuild@38f30b3c0c69453fae61718fc43f33bc) (gcc (GCC) 13.0.1 20230318 (Red Hat 13.0.1-0), GNU ld version 2.39-9.fc38) #1 SMP PREEMPT_DYNAMIC Thu Mar 30 22:32:58 UTC 2023
[    0.000000] Command line: BOOT_IMAGE=(hd0,gpt3)/ostree/fedora-coreos-d2605fec211df3ea8a588c4fbc6c881a8a1ea85f33b2c52eb62e469ff6a25622/vmlinuz-6.2.9-300.fc38.x86_64 mitigations=auto,nosmt ignition.platform.id=qemu console=tty0 console=ttyS0,115200n8 ignition.firstboot ostree=/ostree/boot.1/fedora-coreos/d2605fec211df3ea8a588c4fbc6c881a8a1ea85f33b2c52eb62e469ff6a25622/0
...
[    1.233082] Run /init as init process
[    1.249351] systemd[1]: systemd 253.2-1.fc38 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 +PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD +BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
[    1.251068] systemd[1]: Detected virtualization kvm.
[    1.251391] systemd[1]: Detected architecture x86-64.
[    1.251811] systemd[1]: Running in initrd.

Welcome to Fedora CoreOS 38.20230414.3.0 dracut-059-2.fc38 (Initramfs)!
[...]
Fedora CoreOS 38.20230414.3.0
Kernel 6.2.9-300.fc38.x86_64 on an x86_64 (ttyS0)
SSH host key: SHA256:uUfHiH9pNIgNk8BqSee19d2SczGL9gJKnceEshkkFVA (ECDSA)
SSH host key: SHA256:Y5el9zmddydQYtIpbyE7lefqsXMuUpSRpKwFSoDGh/c (ED25519)
SSH host key: SHA256:aK7YE4sAgVP/411V7tlUo1tCy1+Mz5PDsFMe8RHbvoU (RSA)
Ignition: ran on 2023/05/16 20:29:54 UTC (this boot)
Ignition: user-provided config was applied
Ignition: wrote ssh authorized keys file for user: core
fcos login:

Fedora CoreOS 38.20230414.3.0
Kernel 6.2.9-300.fc38.x86_64 on an x86_64 (ttyS0)
SSH host key: SHA256:uUfHiH9pNIgNk8BqSee19d2SczGL9gJKnceEshkkFVA (ECDSA)
SSH host key: SHA256:Y5el9zmddydQYtIpbyE7lefqsXMuUpSRpKwFSoDGh/c (ED25519)
SSH host key: SHA256:aK7YE4sAgVP/411V7tlUo1tCy1+Mz5PDsFMe8RHbvoU (RSA)
enp1s0: 192.168.122.67 fe80::f014:b56a:71fd:8d0a
Ignition: ran on 2023/05/16 20:29:54 UTC (this boot)
Ignition: user-provided config was applied
Ignition: wrote ssh authorized keys file for user: core
fcos login:
Watch the console output, note the IP address and then SSH into the VM. (You cannot login to the console because no password is set for user “core”.) Look around. Stop and disable the automatic update service:
awarda@fedora:$ ssh core@192.168.122.67
The authenticity of host '192.168.122.67 (192.168.122.67)' can't be established.
ED25519 key fingerprint is SHA256:Y5el9zmddydQYtIpbyE7lefqsXMuUpSRpKwFSoDGh/c.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.122.67' (ED25519) to the list of known hosts.
Fedora CoreOS 38.20230414.3.0
Tracker: https://github.com/coreos/fedora-coreos-tracker
Discuss: https://discussion.fedoraproject.org/tag/coreos
[core@fcos ~]$ rpm-ostree status
State: idle
AutomaticUpdatesDriver: Zincati
  DriverState: active; periodically polling for updates (last checked Tue 2023-05-16 20:35:03 UTC)
Deployments:
● fedora:fedora/x86_64/coreos/stable
    Version: 38.20230414.3.0 (2023-05-01T21:23:54Z)
    Commit: 8453aa86d93a2eefc9ef948ad1c8d40f8603f4d37f202e4624df2a9fb2beb12b
    GPGSignature: Valid signature by 6A51BBABBA3D5467B6171221809A8D7CEB10B464
[core@fcos ~]$ sudo systemctl disable zincati.service
Removed "/etc/systemd/system/multi-user.target.wants/zincati.service".
[core@fcos ~]$ sudo systemctl stop zincati.service
Note the name of the deployment. The bullet point denotes the active deployment; it is currently the only deployment.
2. Rebase to the node base image
Now rebase the system to the Node Base Image:
[core@fcos ~]$ sudo rpm-ostree rebase ostree-unverified-registry:quay.io/fedora/fedora-coreos:stable
Pulling manifest: ostree-unverified-image:docker://quay.io/fedora/fedora-coreos:stable
Importing: ostree-unverified-image:docker://quay.io/fedora/fedora-coreos:stable (digest: sha256:4bfef20cdaede36773b8bcc24b560165e04377c64e1ba1ec81179b8b5a30693e)
ostree chunk layers stored: 23 needed: 28 (525.0 MB)
Fetching ostree chunk sha256:097d48f2445b (87.2 MB)
Fetched ostree chunk sha256:097d48f2445b
Fetching ostree chunk sha256:2dd501245ef9 (78.0 MB)
Fetched ostree chunk sha256:2dd501245ef9
[...]
Fetching ostree chunk sha256:e5cc696ba849 (1.4 MB)
Fetched ostree chunk sha256:e5cc696ba849
Staging deployment... done
Freed: 1.9 MB (pkgcache branches: 0)
Upgraded:
  NetworkManager 1:1.42.0-1.fc38 -> 1:1.42.6-1.fc38
  NetworkManager-cloud-setup 1:1.42.0-1.fc38 -> 1:1.42.6-1.fc38
  NetworkManager-libnm 1:1.42.0-1.fc38 -> 1:1.42.6-1.fc38
  [...]
  vim-minimal 2:9.0.1429-1.fc38 -> 2:9.0.1486-1.fc38
  zchunk-libs 1.3.0-1.fc38 -> 1.3.1-1.fc38
  zstd 1.5.4-1.fc38 -> 1.5.5-1.fc38
Added:
  atheros-firmware-20230404-149.fc38.noarch
  brcmfmac-firmware-20230404-149.fc38.noarch
  mt7xxx-firmware-20230404-149.fc38.noarch
  realtek-firmware-20230404-149.fc38.noarch
Changes queued for next boot. Run "systemctl reboot" to start a reboot
You now have two deployments. Note the different name of the new deployment, and the bullet point in front of the active deployment:
[core@fcos ~]$ rpm-ostree status
State: idle
AutomaticUpdatesDriver: Zincati
  DriverState: inactive
Deployments:
  ostree-unverified-registry:quay.io/fedora/fedora-coreos:stable
    Digest: sha256:4bfef20cdaede36773b8bcc24b560165e04377c64e1ba1ec81179b8b5a30693e
    Version: 38.20230430.3.1 (2023-05-16T20:50:25Z)
    Diff: 78 upgraded, 4 added
● fedora:fedora/x86_64/coreos/stable
    Version: 38.20230414.3.0 (2023-05-01T21:23:54Z)
    Commit: 8453aa86d93a2eefc9ef948ad1c8d40f8603f4d37f202e4624df2a9fb2beb12b
    GPGSignature: Valid signature by 6A51BBABBA3D5467B6171221809A8D7CEB10B464
The “ostree-unverified-registry” prefix signals that the new deployment is not secured by a GPG signature. We can ignore this for the purpose of demonstrating the fundamental image layering concepts and mechanisms.
When you are done, reboot the VM, wait a minute, and SSH again into the system:
[core@fcos ~]$ sudo systemctl reboot
Connection to 192.168.122.67 closed by remote host.
Connection to 192.168.122.67 closed.
awarda@fedora:$ sleep 60; ssh core@192.168.122.67
Fedora CoreOS 38.20230430.3.1
Tracker: https://github.com/coreos/fedora-coreos-tracker
Discuss: https://discussion.fedoraproject.org/tag/coreos
Last login: Tue May 16 20:37:10 2023 from 192.168.122.1
[core@fcos ~]$ rpm-ostree status
State: idle
Deployments:
● ostree-unverified-registry:quay.io/fedora/fedora-coreos:stable
    Digest: sha256:4bfef20cdaede36773b8bcc24b560165e04377c64e1ba1ec81179b8b5a30693e
    Version: 38.20230430.3.1 (2023-05-16T20:50:25Z)
  fedora:fedora/x86_64/coreos/stable
    Version: 38.20230414.3.0 (2023-05-01T21:23:54Z)
    Commit: 8453aa86d93a2eefc9ef948ad1c8d40f8603f4d37f202e4624df2a9fb2beb12b
    GPGSignature: Valid signature by 6A51BBABBA3D5467B6171221809A8D7CEB10B464
Note that the bullet point denoting the active deployment has moved. (You could still roll back to the previous deployment with “rpm-ostree rollback”.)
By the way, if you need to list your VMs, or have shut down the VM and want to restart it:
awarda@fedora:$ virsh --connect="qemu:///system" list --all
awarda@fedora:$ virsh --connect="qemu:///system" start fcos --console
3. Build a custom Node Base Image
You can use any container toolset you prefer (e.g. podman, buildah, skopeo, docker) to build a custom Node Base Image and push it to an image registry. We will use Red Hat’s quay.io, which is a cloud-based enterprise image registry that includes an image build service.
Go to https://quay.io, register or sign-in and create a repository:
My repository is at https://quay.io/repository/awarda/fedora-coreos-with-nmon
Select the repository, then go to the settings (https://quay.io/repository/awarda/fedora-coreos-with-nmon?tab=settings) and change your repository’s visibility to “public”.
Next click “Builds” in the left pane or click “View Build History” (https://quay.io/repository/awarda/fedora-coreos-with-nmon?tab=builds):
Your Build History will be empty, while mine shows three previous builds:
Now press the button “Start New Build”.
Your Containerfile (aka Dockerfile) that you prepared on your workstation should look like this:
awarda@fedora:$ cat Containerfile
FROM quay.io/fedora/fedora-coreos:testing-devel
RUN rpm-ostree install nmon && ostree container commit
Note that we originally installed Fedora CoreOS Stable to the VM, and now we build a custom Node Base Image that is derived from Fedora CoreOS Testing.
Next press the button “Select file” to select the local Containerfile from your workstation:
The response should be “Dockerfile found and valid”. Then press the button “Start Build”. You will be returned to the Build History and will notice that a new build has been initiated. You can click on that build to observe the build process:
You can click on the individual steps to examine the details.
Before deploying the custom Node Base Image to the VM with the Fedora CoreOS system, you can use a container runtime, e.g. podman (or docker), to execute it on your local workstation to test and inspect it:
awarda@fedora:$ podman run -ti quay.io/awarda/fedora-coreos-with-nmon:latest
Trying to pull quay.io/awarda/fedora-coreos-with-nmon:latest...
Getting image source signatures
Copying blob 1d1851157929 skipped: already exists
Copying blob 42ddc6f713fa done
Copying blob 7be46bc79996 done
Copying blob b9128c53e121 done
Copying blob 2d72193934f5 skipped: already exists
Copying blob 097d48f2445b skipped: already exists
Copying blob 763d9b104c27 skipped: already exists
[...]
Copying blob 5925f4d658b7 skipped: already exists
Copying blob 89dda9eb9a1f skipped: already exists
Copying blob 910ff6f93303 skipped: already exists
Copying blob 89aafe43ec55 done
Copying blob f08920a9d385 done
Copying config c8d1447b0b done
Writing manifest to image destination
Storing signatures
bash-5.2# cat /etc/redhat-release
Fedora release 38 (Thirty Eight)
bash-5.2# which nmon
/usr/bin/nmon
4. Rebase to the custom Node Base Image
Enter the VM again via SSH and rebase the Fedora CoreOS system to the custom Node Base Image we have just built:
[core@fcos ~]$ sudo rpm-ostree rebase ostree-unverified-registry:quay.io/awarda/fedora-coreos-with-nmon:latest
Pulling manifest: ostree-unverified-image:docker://quay.io/awarda/fedora-coreos-with-nmon:latest
Importing: ostree-unverified-image:docker://quay.io/awarda/fedora-coreos-with-nmon:latest (digest: sha256:03e77dfc597e36de653c04e1476e572976bb3371b0188ac655bf3813f2e3985c)
ostree chunk layers stored: 35 needed: 16 (331.2 MB)
custom layers stored: 0 needed: 1 (8.6 MB)
Fetching ostree chunk sha256:b9128c53e121 (78.0 MB)
Fetched ostree chunk sha256:b9128c53e121
[...]
Fetching layer sha256:f08920a9d385 (8.6 MB)
Fetched layer sha256:f08920a9d385
Staging deployment... done
Upgraded:
  alternatives 1.22-1.fc38 -> 1.24-1.fc38
  audit-libs 3.1-2.fc38 -> 3.1.1-1.fc38
  container-selinux 2:2.209.0-1.fc38 -> 2:2.211.1-1.fc38
  [...]
  systemd-pam 253.2-1.fc38 -> 253.4-1.fc38
  systemd-resolved 253.2-1.fc38 -> 253.4-1.fc38
  systemd-udev 253.2-1.fc38 -> 253.4-1.fc38
Added:
  nmon-16n-5.fc38.x86_64
  passt-0^20230509.g96f8d55-1.fc38.x86_64
  passt-selinux-0^20230509.g96f8d55-1.fc38.noarch
Changes queued for next boot. Run "systemctl reboot" to start a reboot
By default, when rebooting into a new deployment, only the new and the previous deployment (= the rollback deployment) will be kept, and the other deployments will be deleted. If you want to prevent deletion of a deployment, you can “pin” it. We want to pin all our deployments:
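The retention rule can be sketched in plain shell. This is only a simulation with made-up deployment names, not a query of real rpm-ostree state: keep the two newest deployments, plus anything pinned.

```shell
# Simulated deployment list, newest first; "old2" stands in for a pinned one.
deployments="new previous old1 old2"
pinned=" old2 "

index=0
kept=""
for d in $deployments; do
  # Keep the two newest deployments, or any deployment found in the pinned set.
  if [ "$index" -lt 2 ] || [ "${pinned#* $d }" != "$pinned" ]; then
    kept="$kept$d "
  fi
  index=$((index + 1))
done
echo "kept: $kept"
```

Running this prints `kept: new previous old2` — the unpinned "old1" would be garbage-collected, which is exactly why we pin the deployments we want to keep below.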
[core@fcos ~]$ rpm-ostree status
State: idle
Deployments:
  ostree-unverified-registry:quay.io/awarda/fedora-coreos-with-nmon:latest
    Digest: sha256:03e77dfc597e36de653c04e1476e572976bb3371b0188ac655bf3813f2e3985c
    Version: 38.20230514.20.1 (2023-05-16T21:34:39Z)
    Diff: 37 upgraded, 3 added
● ostree-unverified-registry:quay.io/fedora/fedora-coreos:stable
    Digest: sha256:4bfef20cdaede36773b8bcc24b560165e04377c64e1ba1ec81179b8b5a30693e
    Version: 38.20230430.3.1 (2023-05-16T20:50:25Z)
  fedora:fedora/x86_64/coreos/stable
    Version: 38.20230414.3.0 (2023-05-01T21:23:54Z)
    Commit: 8453aa86d93a2eefc9ef948ad1c8d40f8603f4d37f202e4624df2a9fb2beb12b
    GPGSignature: Valid signature by 6A51BBABBA3D5467B6171221809A8D7CEB10B464
[core@fcos ~]$ sudo ostree admin pin 0
error: Cannot pin staged deployment
[core@fcos ~]$ sudo ostree admin pin 1
Deployment 1 is now pinned
[core@fcos ~]$ sudo ostree admin pin 2
Deployment 2 is now pinned
Reboot, wait a minute, then enter the VM again via SSH and check the result:
[core@fcos ~]$ sudo systemctl reboot
Connection to 192.168.122.67 closed by remote host.
Connection to 192.168.122.67 closed.
awarda@fedora:$ sleep 60; ssh core@192.168.122.67
Fedora CoreOS 38.20230514.20.1
Tracker: https://github.com/coreos/fedora-coreos-tracker
Discuss: https://discussion.fedoraproject.org/tag/coreos
Last login: Tue May 16 21:28:48 2023 from 192.168.122.1
[core@fcos ~]$ rpm-ostree status
State: idle
Deployments:
● ostree-unverified-registry:quay.io/awarda/fedora-coreos-with-nmon:latest
    Digest: sha256:03e77dfc597e36de653c04e1476e572976bb3371b0188ac655bf3813f2e3985c
    Version: 38.20230514.20.1 (2023-05-16T21:34:39Z)
  ostree-unverified-registry:quay.io/fedora/fedora-coreos:stable
    Digest: sha256:4bfef20cdaede36773b8bcc24b560165e04377c64e1ba1ec81179b8b5a30693e
    Version: 38.20230430.3.1 (2023-05-16T20:50:25Z)
    Pinned: yes
  fedora:fedora/x86_64/coreos/stable
    Version: 38.20230414.3.0 (2023-05-01T21:23:54Z)
    Commit: 8453aa86d93a2eefc9ef948ad1c8d40f8603f4d37f202e4624df2a9fb2beb12b
    GPGSignature: Valid signature by 6A51BBABBA3D5467B6171221809A8D7CEB10B464
    Pinned: yes
Finally, to check that the customization that we performed on the Node Base Image really made it to the updated system, execute the “nmon” monitoring utility!
You can find more image layering examples here:
RHEL CoreOS: https://github.com/openshift/rhcos-image-layering-examples
Fedora CoreOS: https://github.com/coreos/layering-examples
More info about Node Base Images can be found in the documentation of the upstream “OStree native containers” project: https://coreos.github.io/rpm-ostree/container/
5. Summary and what’s coming next
Fedora CoreOS, Fedora IoT, Fedora Silverblue, CentOS Stream CoreOS, RHEL CoreOS and RHEL for Edge (and other OSTree-based Linux systems such as Debian/Endless OS) operating system images for bare metal and VMs can now be customized using standard OCI container build tools. There is no need for Linux admins to learn yet another toolset to create customized rpm-ostree deployments. Ten years after Docker’s debut, almost every application developer and Linux admin knows how to create application container images using a Containerfile (or Dockerfile) and tools like docker, podman and buildah, and now we can reuse these skills for OS image customization. This drastically lowers the barrier for customization of RHEL and Fedora operating system images.
Besides applying a Node Base Image to an rpm-ostree based system to update the system, we can also execute a Node Base Image on a container runtime (such as podman or docker), for example to perform inspections and tests. This makes inclusion into automated build pipelines with quality checks and test automation very straightforward.
How is this technology used in Red Hat products?
Red Hat OpenShift ships with RHEL CoreOS bootable Node Base Images, which you can customize with any OCI container tooling (e.g. Podman, Docker) before using them on your bare-metal or virtual-machine OpenShift master and worker nodes.
- Support for adding RHEL hotfix packages to RHEL CoreOS is GA in OpenShift 4.12!
- Developer Preview in OpenShift 4.12: anything you want to try! Pre-install additional software, copy-in configuration files directly, even run Ansible playbooks against the image pre-deployment!
- You define the image, the OpenShift Machine Config Operator rolls it out for you!
The just-released OpenShift 4.13 extends support for custom Node Base Images even further.
But this is not the end of an exciting innovation:
When reflecting on the above example, note that we started with a conventional, Butane/Ignition-driven installation of an rpm-ostree deployment to the VM. We then switched to a Node Base Image after the first boot and applied a custom Node Base Image. Currently it is not possible to directly install a Node Base Image to a blank VM or bare metal server; we always have to start with a conventional installation that we then switch over to a Node Base Image.
Application container images do not have to include a Linux kernel (vmlinuz and kernel modules), because all containers running on the same host are sharing the Linux kernel with the host operating system. (The containers are isolated from each other and from the host by Linux kernel namespaces, cgroups, seccomp and SELinux.) Application containers also need no bootloader and boot configuration (such as grub), because they are executed by a container runtime (such as podman, docker, cri-o).
Node Base Images already contain a Linux kernel in /lib/modules/ which is copied over to /boot/ostree/<hostname>-<deployment>/ when the system is updated from the Node Base Image. The bootloader configuration is also updated in /boot/loader/ by the Node Base Image. (The included kernel and bootloader configuration are ignored and skipped when a Node Base Image is executed by a container runtime, such as podman or docker.)
But although the Node Base Image contains a kernel and bootloader configuration, we cannot yet directly boot a Node Base Image on a blank VM or bare metal server. The upstream community project “bootc” will enable us to directly boot and install a Node Base Image on blank hardware, install the bootloader and create the initial boot configuration, thus skipping the conventional installation of an rpm-ostree deployment. The system will then also be able to update the bootloader configuration.
But bootc is still experimental: at the time of writing, bootc is in active development and not yet considered ready for production use, while the demonstrated layering of a customized Node Base Image onto a conventionally installed RHEL CoreOS worker or master node of an OpenShift cluster is already GA and fully supported by Red Hat in OpenShift 4.12 and later. More info on bootc can be found here: https://github.com/containers/bootc
Update: The concepts described in this blog article have since made it into RHEL 9.4 under the name “RHEL Image Mode”, widely announced and demonstrated at Red Hat Summit 2024 in Denver: https://www.redhat.com/en/blog/introducing-image-mode-red-hat-enterprise-linux