So there’s your Java code on the one side and your Kubernetes (K8s) cluster on the other.
There are quite a few approaches to get your code running in a pod on K8s. Let’s have a look at the general mechanics needed and then explore ways that’ll make life easier.
You can code along or check the examples on a GitHub repo created just for this article. For each approach, a separate project is available there.
As this article is quite comprehensive, here’s a table of contents for you in case you’re looking for something specific:
General Approach
To get Java code running in a pod, basically these four steps are mandatory:
- Create a Java Artifact
We basically need to create one or more artifacts that can be executed. We'll proceed with the most common case, an uber-jar, and discuss other cases (Java/Jakarta EE deployables, native executables) later.
- Create a Container Image with the Artifact
Next, the artifact needs to be placed within a container image (we focus on OCI-compatible ones1). It also needs to be started when the container starts, so we somehow need to get a Java runtime into the image.
- Make the Image Available to K8s
The created image needs to be available to the targeted K8s cluster. That implies availability through a container image registry, be it local or in the internet's wilderness.
- Use the Image in a Pod
Finally, we need a K8s pod running the image.
Ready to go? Let’s get started!
Prerequisites
I assume you have access to the following command-line tools and technology:
- docker and/or podman
- kubectl and/or oc
- helm (optional)
- JDK 17
- mvn (optional, you can follow along substituting the mvn examples with ./mvnw)
- pack
- A Kubernetes or an OpenShift cluster (local or in the wild2)
- Access to some sort of container image registry (Docker/podman local, public registries, private registries)
I. Create (Source Code for) the Java Artifact
We start with generating a simple Java application based on Quarkus, just to feel the developer joy it provides.
So we run from the command line…
mvn io.quarkus:quarkus-maven-plugin:3.0.0.Final:create \
-DprojectGroupId=org.opensourcerers \
-DprojectArtifactId=java2pod \
-DclassName="org.opensourcerers.Java2PodResource" \
-Dpath="/api/java2pod"
…and get a folder with this structure:3
Don’t worry – this article is not about the code at all!
Project Code Available At
To make things easier, we add the following to src/main/resources/application.properties:
quarkus.package.type=uber-jar
To make things more interesting for later, I’ve changed the code in Java2PodResource.java from
package org.opensourcerers;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
@Path("/api/java2pod")
public class Java2PodResource {
@GET
@Produces(MediaType.TEXT_PLAIN)
public String hello() {
return "Hello from RESTEasy Reactive";
}
}
to
package org.opensourcerers;
import org.eclipse.microprofile.config.inject.ConfigProperty;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
@Path("/api/java2pod")
public class Java2PodResource {
@ConfigProperty(name = "environment.id", defaultValue="local")
String environmentId;
@GET
@Produces(MediaType.TEXT_PLAIN)
public String getEnvironmentId() {
return "Your environment ID is: " + environmentId;
}
}
Project Code Available At
Nothing spectacular. By the way, we don't need to change the dependencies, as Quarkus comes with an integrated MicroProfile implementation called SmallRye.4
Let’s build our artifact:
mvn clean package -DskipTests # -U -e
You could use ./mvnw instead of mvn, but I've learnt not everybody is happy with that. I therefore use the shorter command, but encourage you to try ./mvnw in case you encounter problems (all examples were tested with Maven 3.8.6, though).
Uncomment the latter arguments if you need to re-download Maven artifacts for whatever reason.5
You should find a self-contained executable jar at /02-java2pod-extended/target/.6
Let's just try whether it works locally:
java -jar target/java2pod-1.0.0-SNAPSHOT-runner.jar
We should see an output like this:
And should be able to access the REST service at http://0.0.0.0:8080/api/java2pod, either via curl (curl -w '\n' http://0.0.0.0:8080/api/java2pod) or in the browser.
We just ignore the UI at http://0.0.0.0:8080/7
Quit the application by entering ctrl+c.
To prove that the external configuration is working, we could optionally try this out:
export ENVIRONMENT_ID=dummy
java -jar target/java2pod-1.0.0-SNAPSHOT-runner.jar
The output from http://0.0.0.0:8080/api/java2pod should now have changed from “Your environment ID is: local” to “Your environment ID is: dummy”. Then, exit the application again by entering ctrl+c.8
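The mapping from the ENVIRONMENT_ID environment variable to the environment.id config property is no coincidence: the MicroProfile Config specification derives the environment-variable form of a property name by replacing each non-alphanumeric character with an underscore and uppercasing the result. A minimal shell sketch of that naming rule:

```shell
# MicroProfile Config env-var mapping rule:
# replace non-alphanumeric characters with '_', then uppercase.
prop="environment.id"
env_name=$(echo "$prop" | tr -c '[:alnum:]\n' '_' | tr '[:lower:]' '[:upper:]')
echo "$env_name"   # prints ENVIRONMENT_ID
```

This is why exporting ENVIRONMENT_ID overrides the environment.id property without any code changes.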
Mission Nr. 1 accomplished!9
II. Create a Container Image with the Artifact
Now the fun part begins. We actually have several options and we will go through all of them.
The Hard Way: Plain (Docker|Container)file
Project Code Available At
As a developer, you shouldn’t do this frequently as it’s time consuming and keeps you away from coding. But it greatly helps you understand what other tools are fiddling around with.
Container Image Quickstart (Optional)
A Containerfile (or Dockerfile) is a recipe for a container runtime on how to build a container image.10
The general structure of a (Container|Docker)file is like this
FROM registry.access.redhat.com/ubi8/ubi-micro
CMD ["echo", "such a simple thing"]
Take a guess what this could mean! In case that’s too much for the moment, here’s a short explanation:
FROM
References a so-called base image. Here, I reference one of Red Hat’s Universal Base Images (UBI) which can be used freely and lead to enhanced security – good for production!
CMD
Runs this command when the image gets executed as a container.
I hope you understand the general principle and structure:
# Comment
INSTRUCTION arguments
To create and then run this image, you need to put it into a file named Dockerfile (picked up by convention when it's in the current directory) and run
docker build . -t super-simple
The -t flag applies a tag to the image, making it easier to find later.
Or, alternatively you could run
podman build . -t super-simple
if you prefer using podman, a daemon-less docker alternative. There's a nice desktop implementation for it, Podman Desktop, that even helps you deal with pods on your local machine (!).
The docker command above will lead to an output like:
$>03-minimal-dockerfile>docker build . -t super-simple
Sending build context to Docker daemon 2.048kB
Step 1/2 : FROM registry.access.redhat.com/ubi8/ubi-micro
---> 853b8b14ac8a
Step 2/2 : CMD ["echo", "such a simple thing"]
---> Running in 6820a95d0679
Removing intermediate container 6820a95d0679
---> 268abc6693a7
Successfully built 268abc6693a7
Successfully tagged super-simple:latest
We can’t go into details here, but you should understand that this image is created based on layers which we could inspect11. It can be found for further use in the local (docker|podman) image store – either via the created tag super-simple or the short12 container ID, in this example 268abc6693a7 (it will differ on your machine).
To run the image, we just run
docker run super-simple
and should see an output like this:
$>03-minimal-dockerfile>docker run super-simple
such a simple thing
Congrats on having built a container image nearly from the ground up.13 But we’re not done. This wasn’t hard, was it?
The Java Container Image
Project Code Available At
As simple as (Container|Docker)files seem, we need to consider that our uber-jar needs a Java runtime, otherwise it couldn’t be executed.
Luckily, we're covered with Java-specific UBI images that are already prepared with a Java runtime (OpenJDK).14
A very simple approach thus is a Dockerfile like this:
FROM registry.access.redhat.com/ubi8/openjdk-17-runtime:latest
COPY target/*-runner.jar /deployments/quarkus-run.jar
EXPOSE 8080
USER 185
ENV JAVA_OPTS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager"
ENV JAVA_APP_JAR="/deployments/quarkus-run.jar"
Some notes here:
FROM
This references the base image. Be aware that Java-specific UBI images can vary: There are builder images that contain everything needed to build your application image from source and pure runtime images as the one used in this example.15
EXPOSE
Opens the port to our service. To enable debugging in the container, add 5005.
USER
Makes sure that we run the process under a dedicated UID.16
ENV
Defines environment variables. JAVA_APP_JAR tells this specific image “type” where to find the jar.
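To illustrate the EXPOSE note above: a debug-enabled variant of the same Dockerfile could look like the sketch below. This assumes the base image passes JAVA_OPTS through to the JVM (as the image above does); the -agentlib:jdwp flag is the standard JVM remote-debugging mechanism, with 5005 as the conventional debug port.

```dockerfile
# Sketch: same image as above, with remote debugging on port 5005.
FROM registry.access.redhat.com/ubi8/openjdk-17-runtime:latest
COPY target/*-runner.jar /deployments/quarkus-run.jar
EXPOSE 8080 5005
USER 185
# suspend=n lets the JVM start without waiting for a debugger to attach
ENV JAVA_OPTS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005"
ENV JAVA_APP_JAR="/deployments/quarkus-run.jar"
```

When running the container, remember to also publish the debug port, e.g. with -p 5005:5005/tcp.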
To build the image, we run (with the Dockerfile in the current directory):
docker build . -t java2pod-extended
And then run it:
docker run -p 8080:8080/tcp java2pod-extended
So we should see output like this…
…and be able to interact with the example service as before (e.g. in the web browser or another terminal with curl). Stop the container with ctrl+c.
To run the container in the background, we use -d as parameter:
docker run -d -p 8080:8080/tcp java2pod-extended
And stop it by first searching for the container via docker ps, copying the container ID from the output (the first few characters are actually sufficient), and then executing docker stop <container id>.
Hint: make sure to check containers are running with docker ps. docker ps -a will show you the locally available containers even if not running.
There’s definitely more to this approach, but I hope you got a basic impression.17
The Easy Way: Use Jib!
Obviously the Dockerfile approach is not the most convenient. Useful for basic understanding, but definitely not what most developers continuously want to deal with.
Fortunately, there’s an approach that’s much more intuitive. Meet JIB.
Jib (Java Image Builder?18) is a project founded by Google. It's quite comprehensive and actually does much more than just building the image – it can also push the image to a container registry, supports Maven and Gradle, comes with its own CLI, and more. The base process gets drastically reduced:
We'll only look at the Maven plugin side here, but the approach for Gradle is fully comparable.19
Bare Jib
Project Code Available At
First, we need to add the Jib Maven plugin to our pom.xml and adjust the configuration, in this case for Quarkus, which needs a Jib extension to build the image:
<build>
<plugins>
<!-- more plugins -->
<plugin>
<groupId>com.google.cloud.tools</groupId>
<artifactId>jib-maven-plugin</artifactId>
<version>3.3.1</version>
<dependencies>
<dependency>
<groupId>com.google.cloud.tools</groupId>
<artifactId>jib-quarkus-extension-maven</artifactId>
<version>0.1.1</version>
</dependency>
</dependencies>
<configuration>
<to>
<image>java2pod-extended-jib-base</image>
</to>
<container>
<!-- special case for Quarkus to suppress warnings-->
<mainClass>bogus</mainClass>
<jvmFlags>
<flag>-Dquarkus.http.host=0.0.0.0</flag>
<flag>-Djava.util.logging.manager=org.jboss.logmanager.LogManager</flag>
</jvmFlags>
<ports>
<port>8080</port>
</ports>
<user>185</user>
</container>
<pluginExtensions>
<pluginExtension>
<implementation>
com.google.cloud.tools.jib.maven.extension.quarkus.JibQuarkusExtension
</implementation>
<properties>
<packageType>fast-jar</packageType>
</properties>
</pluginExtension>
</pluginExtensions>
</configuration>
</plugin>
<!-- more plugins ? -->
</plugins>
</build>
The easiest way then – which will not work at the moment for our example – is to run Jib with Maven as follows:
mvn compile jib:build
This is because Jib directly wants to push the created image to a registry, and we haven't set that up so far. Actually, we're just at step 2: building the image, right? The command that does the trick is:
# run
# mvn clean quarkus:build
# before that!
mvn compile jib:dockerBuild
This tells Jib to only run a local image build (with a running Docker daemon in the background). If you prefer Podman like me (daemon-less, rootless, handy, and fully open source), you need to tweak the command even further:
mvn compile jib:dockerBuild -Djib.dockerClient.executable=$(which podman)
We might have a look at what has been produced with (docker|podman) images. If you wonder why your image is displayed as being more than 50 years old, have a look at https://reproducible-builds.org/.
We should then be able to run our image with:
docker run -p8080:8080/tcp java2pod-extended-jib-base
(in this case with docker).20
Jib the Quarkus Way
Project Code Available At
The above example (05.1) might give you the impression that setup and configuration are a bit clumsy.
In fact, Quarkus has built-in Jib support, whereas for Spring Boot, Jib itself has built-in support. Let's start with Quarkus. All you need to do is add this dependency to pom.xml:21
<dependencies>
<!-- (...) -->
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-container-image-jib</artifactId>
</dependency>
<!-- (...) -->
</dependencies>
With the added dependency, we run:
mvn install -Dquarkus.container-image.build=true # -DskipTests
The property can also be set in application.properties, of course.
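As a sketch, the persistent variant could look like this in src/main/resources/application.properties (equivalent to passing the flag on the command line):

```properties
# src/main/resources/application.properties
quarkus.package.type=uber-jar
# equivalent to -Dquarkus.container-image.build=true on the command line:
quarkus.container-image.build=true
```

With this in place, a plain mvn install builds the container image as well.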
This creates an image with a new name structure that we need to consider before running it:
When executing (docker|podman) images, we see it’s <username>/<artifactId>22. Let’s do this:
docker run -p8080:8080/tcp karsten/java2pod:1.0.0-SNAPSHOT
Again, we should see the running container and be able to query the “API”. Stop the container with ctrl+c. Mission accomplished.
A Jib-Spring Boot Example
Project Code Available At
To show how easy Jib can be in case it supports frameworks such as Spring Boot natively, just check out this example:
We create a simple spring-boot-starter-web project via Spring Initializr as such:
Then we add the Jib build dependency in pom.xml:
<build>
<plugins>
<!-- (...) -->
<plugin>
<groupId>com.google.cloud.tools</groupId>
<artifactId>jib-maven-plugin</artifactId>
<version>3.3.1</version>
</plugin>
</plugins>
</build>
We then run:
mvn jib:dockerBuild
The image gets created without any hassle:
The image then can be run with:
docker run -p8080:8080/tcp java2pod-spring-boot:0.0.1-SNAPSHOT
Notice that in this case the 0.0.1-SNAPSHOT tag has been created automatically. By opening http://0.0.0.0:8080/ we should get Spring’s Whitelabel Error Page indicating there’s no URL mapping at all.
Very easy, isn’t it? Stop the container with ctrl+c.
(Cloud Native) Buildpacks
Cloud Native Buildpacks (here referred to as CNB, often referred to as just Buildpacks) is another approach to getting from source to image. It's a project initially spawned by Heroku in 2011. In 2018 it joined the Cloud Native Computing Foundation and finally became an incubating project thanks to the joint efforts of Heroku and Pivotal. So it has quite a history and has so far adapted flexibly to upcoming standards and specifications such as OCI, Docker registry v2, etc.
CNB's basic approach is to get developers away from writing (Container|Docker)files. Instead, the comprehensive “background” tooling inspects the source code and then tries to build the image. Of course, we can tweak everything to our liking and write our own buildpacks or extensions. It even goes far beyond building – it integrates SBOM support and much more. Let's take a first dive into CNB!
As Quarkus supports CNB via a dependency, we'll explore first the basic, then the Quarkus-specific approach.
Buildpacks – Basic Approach
Project Code Available At
Prerequisite: We need to install pack, CNB's CLI. We make ourselves comfortable with the CLI and ensure the Docker daemon is running.23
In our directory, we run:
pack build java2pod-extended-buildpacks-basic --builder paketobuildpacks/builder:tiny
So the basic syntax is
pack build <name of the image to be created> --builder <builder reference>
We can get a list of suggested builders via
pack builder suggest
Note that the behavior of the builders can vary dramatically. Here we take the “tiny” builder from Paketo.
After running the above command, we should see output like this:
With a final success message like so:
We see, there’s a bunch of stuff going on and to dive in more deeply, we need to go through the comprehensive documentation at https://buildpacks.io/docs/.
We can check with docker images that our image has been created and can start it with:
docker run -p8080:8080/tcp java2pod-extended-buildpacks-basic:latest
Also check that the endpoint is working. After that, stop the container with ctrl+c. If you wonder how to finetune the image build, have a look at https://buildpacks.io/docs/app-developer-guide/specify-buildpacks/.
By the way, you might notice that after stopping the container we see output we haven't seen before:
This is caused by the specific image used.24
Buildpacks – Quarkus Approach
Project Code Available At
Quarkus seems to follow an “as much as possible just through dependencies” approach, and the Buildpack integration is no different!
There’s just one thing to be specified at the beginning and that is the type of the builder image:
# specify quarkus.buildpack.jvm-builder-image=<jvm builder image>, e.g.:
quarkus.buildpack.jvm-builder-image=paketobuildpacks/builder:tiny
As in the other examples, we add this dependency to pom.xml25 :
<dependencies>
<!-- (...) -->
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-container-image-buildpack</artifactId>
</dependency>
<!-- (...) -->
</dependencies>
Given that, we execute:
mvn install -Dquarkus.container-image.build=true
This will create an image following the same naming convention as in the Jib/Quarkus example above:
<username>/java2pod:1.0.0-SNAPSHOT
So in my case it’s karsten/java2pod:1.0.0-SNAPSHOT.
To prove that our approach successfully runs, we could remove any previously built image with the same repository/tag combination via docker image rm <repository:tag> (e.g. karsten/java2pod:1.0.0-SNAPSHOT) or change the artifact name in pom.xml.
Then, as always, we run the container to see it works:
docker run -p8080:8080/tcp karsten/java2pod:1.0.0-SNAPSHOT
And finally stop the container with ctrl+c.
Source-to-Image (S2I) – Locally
Project Code Available At
If you’re familiar with OpenShift, a CNCF-certified Kubernetes distribution, you might have heard about Source-to-Image (S2I), an approach to easily create a pod just by handing over the Git repo’s URL. Comparable to CNB (but much more minimalistic/focused), S2I uses a builder image to identify the technology to be compiled and to create the final image.
This technology is built into OpenShift, but it can also run locally with a CLI.
Prerequisite: Grab the latest release from https://github.com/openshift/source-to-image/releases and add it to your path so you can run it from your terminal.26
We then run
docker pull registry.access.redhat.com/ubi8/openjdk-17:latest
As you can see here, this is an S2I builder image. It differs from the “pure” runtime image we used as base image when creating the image with the Dockerfile.
The image build is executed with
s2i build . registry.access.redhat.com/ubi8/openjdk-17:latest java2pod-s2i-local
We check, e.g. with
docker images | grep java2pod-s2i-local
the repository and tag information. In this case, the local repository name is the image name and the tag is “latest”. All of that can be tweaked.
We run the container with:
docker run -p8080:8080/tcp java2pod-s2i-local:latest
And check it’s working with:
curl -w '\n' http://0.0.0.0:8080/api/java2pod
and finally stop the container with ctrl+c.
III. Make the Image Available (Registries)
Having explored various approaches to create an OCI compatible image, the next step is to make it available for Kubernetes (K8s).
For this, we can either use the (Docker/Podman) local images on a single-node K8s27, push them to a public image registry such as Docker Hub or Quay, or use a dedicated private registry28 such as Harbor (CNCF graduated!), Nexus, or Artifactory – or the private registry built into OpenShift.29
The basic procedure is then as follows: the image is specified in your Pod object and gets pulled from the registry to the node where the container is going to be instantiated.
See this example:
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
Source: https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/simple-pod.yaml
For our next steps, we take the Java image manually created from the “hard way” approach, to achieve a basic understanding of the actions which later on will be automated through various tools.30
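Public images like the nginx example above can be pulled anonymously; for a private registry, the pod additionally needs pull credentials. A hedged sketch of what that could look like – the secret name my-registry-secret and the image reference are placeholders:

```yaml
# Hypothetical pull-secret setup for a private registry.
# Create the secret first, e.g.:
#   kubectl create secret docker-registry my-registry-secret \
#     --docker-server=quay.io --docker-username=<user> --docker-password=<token>
apiVersion: v1
kind: Pod
metadata:
  name: java2pod
spec:
  imagePullSecrets:
    - name: my-registry-secret
  containers:
    - name: java2pod
      image: quay.io/gresch/java2pod-extended:latest
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 8080
```

For the public-registry examples that follow, no pull secret is needed.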
DockerHub
Project Code Available At
Prerequisite: Existing (free) account at https://hub.docker.com (Docker Hub).
First we need to create a repository on Docker Hub:
You normally would link one image type with one repository and have the ability to add multiple images with different tags31, e.g.
organization/specificapp:testing
organization/specificapp:1.0.0
organization/specificapp:qa
organization/specificapp:bla
organization/specificapp:blup
Hope you get the idea.
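The image references above all follow the same anatomy: <registry>/<namespace>/<name>:<tag>. A small sketch of how such a reference is composed (all values here are just placeholders):

```shell
# Anatomy of a fully qualified image reference:
#   <registry>/<namespace>/<name>:<tag>
# When the registry part is omitted, docker assumes docker.io;
# when the tag is omitted, it assumes :latest.
registry="quay.io"
namespace="organization"
name="specificapp"
tag="1.0.0"
image_ref="${registry}/${namespace}/${name}:${tag}"
echo "$image_ref"   # prints quay.io/organization/specificapp:1.0.0
```

Keeping this structure in mind makes the tagging and push commands in the following sections much easier to read.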
In our project directory, we then need to build the image with a tag matching our username (which in my case is “gresch”, see above):
docker build . -t gresch/java2pod-extended
The image will automatically be tagged with the tag “latest”. If you want to change this, add a colon followed by the tag, e.g.:
docker build . -t gresch/java2pod-extended:1.0.0-SNAPSHOT
We then log in to Docker Hub locally (via docker login) and finally push the image to Docker Hub:
docker push gresch/java2pod-extended:latest
We should see the pushed image then on Docker Hub:
If you click on the Tags tab there, you'll see at least the latest tag.
Well done – our image is now publicly available32 and ready to be pulled!
Quay
Project Code Available At
Prerequisite: Existing (free) account at https://quay.io (Red Hat Quay.io, here referred to as just quay.io).
quay.io is a public image registry powered by the open source Project Quay and works much like Docker Hub. You can run it locally with containers or deploy it to your own K8s cluster with the Quay Operator, but we will use the public offering for this example.
quay.io comes with a rather sophisticated organizational/permissions setup (organization/repository/tags, see here), but we’ll keep it simple for this example.
Now, let’s go to the basic java-docker project folder and run some commands33:
docker build . -t quay.io/gresch/java2pod-extended:latest
With the -t flag we specify the tag and this is sufficient for creating the repository.
To apply the tag to an existing image instead, we'd have to get the image ID34:
docker images | grep java2pod-extended
gresch/java2pod-extended 1.0.0-SNAPSHOT 62e0943d5f8d 12 hours ago 415MB
gresch/java2pod-extended latest 62e0943d5f8d 12 hours ago 415MB
java2pod-extended latest 62e0943d5f8d 12 hours ago 415MB
quay.io/gresch/java2pod-extended latest 62e0943d5f8d 12 hours ago 415MB
And apply the tag manually35.
Finally, we push the image to quay.io:
docker push quay.io/gresch/java2pod-extended:latest
Now our image is available to be pulled!
Private Registries
General Thoughts
The approach using these two public registries can be completely applied to private image registries. All you need to do is find out the structure of the “repository URI”. Harbor36 e.g. uses projects instead of users/organizations, so we need to specify this:
docker images | grep java2pod-extended
docker login demo.goharbor.io
# change the image ID here!
docker tag 62e0943d5f8d demo.goharbor.io/java2pod/java2pod-extended:latest
docker push demo.goharbor.io/java2pod/java2pod-extended:latest
So, instead of the username, a project name (here: java2pod) is used (which you need to change if you want to go this path).
So whether you use Nexus, Artifactory, Harbor, or an internally operated version of Docker Hub or Quay – the approach is basically the same.
OpenShift Private Registry
Project Code Available At
Prerequisite: Accessible OpenShift cluster with cluster-admin permissions (!).
I often get questions about how to leverage the built-in container image registry of OpenShift for local development. As the internal registry is created by an operator which conveniently sets up a default route, the setup is quite easy37:
1. We need to grant permissions for accessing the internal registry:
# pull permission
oc policy add-role-to-user registry-viewer <username>
# push permission
oc policy add-role-to-user registry-editor <username>
2. Get the (external) registry route or expose one:
oc get routes -n openshift-image-registry
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
default-route default-route-openshift-image-registry.apps.ocp4.mydomain.mytld image-registry <all> reencrypt/Allow None
3. Login to the registry (docker/podman):
docker login -u `oc whoami` -p `oc whoami --show-token` default-route-openshift-image-registry.apps.ocp4.mydomain.mytld
4. Create a project (==K8s namespace) and an image stream for it in OpenShift:
oc new-project java2pod
oc create imagestream java2pod
The rest should feel quite familiar now:
docker build . -t default-route-openshift-image-registry.apps.ocp4.devworkshop.cc/java2pod/java2pod-extended:latest
With the -t flag we specify the tag and this is sufficient for creating the repository.
We now need to get the image ID, which in this case is 62e0943d5f8d (it will differ on your computer):
docker images | grep java2pod-extended
gresch/java2pod-extended 1.0.0-SNAPSHOT 62e0943d5f8d 12 hours ago 415MB
gresch/java2pod-extended latest 62e0943d5f8d 12 hours ago 415MB
java2pod-extended latest 62e0943d5f8d 12 hours ago 415MB
quay.io/gresch/java2pod-extended latest 62e0943d5f8d 12 hours ago 415MB
Finally, we push the image to the internal OpenShift registry:
docker push default-route-openshift-image-registry.apps.ocp4.devworkshop.cc/java2pod/java2pod-extended:latest
You can test the cluster-local availability e.g. via the OpenShift console:
- Select the Add button, then Container Images on the right-hand side to see the form depicted above.
- Select Image stream from internal registry.
- Select our project, the image stream we have created before and the tag.
- We could even change the icon38.
- Important! Change the Resource type to Deployment in case you have OpenShift Serverless running on the cluster – unless you really want your application to scale down automatically.
- Click Create.
After a while you should be able to access the application via the created route.
Automating the Push
By now we should have an understanding of how to push our image to a registry. For day-to-day work, this seems cumbersome, though. Fortunately, developer-oriented tooling has us covered!
Jib
Basic Setup
Project Code Available At
Remember our first basic Jib try above? When running mvn compile jib:build it didn’t work due to missing credentials:
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 7.006 s
[INFO] Finished at: 2023-05-13T16:44:47+02:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal com.google.cloud.tools:jib-maven-plugin:3.3.1:build (default-cli) on project java2pod: Build image failed, perhaps you should make sure your credentials for 'registry-1.docker.io/library/java2pod-extended-jib-base' are set up correctly. See https://github.com/GoogleContainerTools/jib/blob/master/docs/faq.md#what-should-i-do-when-the-registry-responds-with-unauthorized for help: Unauthorized for registry-1.docker.io/library/java2pod-extended-jib-base: 401 Unauthorized
If you read the message carefully, you see that Jib tried to push the image to:
registry-1.docker.io/library/java2pod-extended-jib-base
And we’re not user “library” at all! So it was only a half-truth… What we want to do in our project directory is the following.
First, we make sure to have done the login to the desired registry at the command line.39
Next, in pom.xml we specify the desired image registry URL and image name as described above. E.g.
<!-- XPath: /project/build/plugins/plugin[5]/configuration -->
<!-- (...) -->
<configuration>
<to>
<!-- <image>docker.io/gresch/java2pod-extended-jib-base-push</image> -->
<!-- or : -->
<image>quay.io/gresch/java2pod-extended-jib-base-push</image>
</to>
<!-- (...) -->
for pushing to Quay. Then, it becomes super-easy:
mvn compile quarkus:build
mvn compile jib:build
Quarkus-specific setup
Project Code Available At
You might remember that for Quarkus we just had to specify a dependency and one parameter at the command line or in application.properties. We continue with this approach and need to customize our setup a bit for the Quarkus application. In this case, I prefer to use application.properties. Here, we add:
quarkus.package.type=uber-jar
quarkus.container-image.push=true
quarkus.container-image.registry=quay.io # adjust
quarkus.container-image.group=gresch # adjust!!!
quarkus.container-image.name=java2pod-extended-jib-quarkus-push
All we need to do now (after logging in to the desired registry, see above) is
mvn install
and the image should get pushed to the desired registry.
Buildpacks (CNB)
Basic Approach
Project Code Available At
You might remember what we did to create our image with Cloud Native Buildpacks (CNB, aka just Buildpacks). We ran40
pack build java2pod-extended-buildpacks-basic --builder paketobuildpacks/builder:tiny
All we need to do now (successful container registry login assumed) is specify the registry correctly in the image reference and add a --publish flag to the build command41:
pack build quay.io/gresch/java2pod-extended-buildpacks-basic-push --builder paketobuildpacks/builder:tiny --publish
The image should then have been pushed to the registry.
Quarkus Approach
Warning: this approach currently seems to be failing – see https://github.com/quarkusio/quarkus/issues/23844 – so skip it for now!
Again, basically all we need to do is specify the image reference so it can be pushed to the container registry (and make sure we can access it). As you might remember, the Quarkus approach for Buildpack was to just add a dependency. As in the Jib-Quarkus example for pushing to a registry, we specify the image reference etc. in application.properties:
quarkus.package.type=uber-jar
quarkus.buildpack.jvm-builder-image=paketobuildpacks/builder:tiny
quarkus.container-image.registry=quay.io #change accordingly
quarkus.container-image.group=gresch # change accordingly
quarkus.container-image.name=java2pod-extended-buildpacks-quarkus-push
quarkus.container-image.push=true
Then, we run:
mvn install -Dquarkus.container-image.build=true
If you wonder why we haven’t put quarkus.container-image.build=true into application.properties – this is to avoid nested build attempts42.
The image should be built and then pushed to the desired registry.
IV. Create a Pod With the Image (K8s)
We finally come to a close! It was quite a way, but we’re not done yet: We want to use the artifact running in a container on Kubernetes (K8s).
The Hard Way: K8s YAML
We won’t go too much into this – this part should give you an impression of the complexity and what you'd have to deal with when approaching K8s manually.
In this example we pull the image from an external registry (not a local one on the K8s node) and thus the reference to “gresch” needs to be replaced:
---
apiVersion: v1
kind: Service
metadata:
name: java2pod
labels:
app: java2pod
spec:
type: NodePort
selector:
app: java2pod
ports:
- protocol: TCP
port: 8080
name: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: java2pod
spec:
selector:
matchLabels:
app: java2pod
replicas: 1
template:
metadata:
labels:
app: java2pod
spec:
containers:
- name: java2pod
image: quay.io/gresch/java2pod-extended:1.0.0-SNAPSHOT
ports:
- containerPort: 8080
Here we define two objects: a Service and a Deployment (in java2pod-service-and-deployment.yaml). We can apply it against our K8s instance43 and play around a bit. But there are many specialties, such as the Ingress addon, which I won’t cover here. You can also work against an OpenShift instance with kubectl.44
We apply this through:
kubectl apply -f java2pod-service-and-deployment.yaml
This applies everything to the default namespace.45 Later on, we can check that our Pod has been created:
kubectl get pods
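Once the Pod is reported as Running, a quick smoke test is possible even without any Ingress, by port-forwarding the Service – a sketch; the /hello path is an assumption based on the typical Quarkus starter resource:

```shell
# Forward local port 8080 to the Service (runs in the background)...
kubectl port-forward service/java2pod 8080:8080 &
# ...and call the application endpoint.
curl http://localhost:8080/hello
```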
But we wouldn’t be done yet: the Pod probably needs to be made available from the outside through an Ingress, we might need to add health checks, and so forth.
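To illustrate what that would mean, here is a minimal sketch of such an Ingress (the host name is an assumption – replace it with your cluster’s domain):

```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: java2pod
spec:
  rules:
    - host: java2pod.example.com  # assumption: your external domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: java2pod
                port:
                  number: 8080
```

Health checks would mean adding readiness/liveness probes to the Deployment’s container spec on top of that (with Quarkus, the quarkus-smallrye-health extension provides suitable /q/health endpoints).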
For this article, I’d like to leave you with the impression that there’s a lot to learn before you can do K8s the hard way. There must be a better way, focused on developers.
Helm Charts to the Rescue?
You might have heard about Helm, which calls itself “The package manager for Kubernetes” and claims that Helm Charts are capable of helping you define, install, and upgrade even the most complex K8s application.
Let’s have a look at what’s behind all this:
Following the Quarkus way, we start with adding two dependencies to pom.xml46:
<dependency>
    <groupId>io.quarkiverse.helm</groupId>
    <artifactId>quarkus-helm</artifactId>
    <version>1.0.6</version>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-kubernetes</artifactId>
</dependency>
One is for Helm Chart generation, the other for generating K8s-specific files – if we had used the OpenShift extension, OpenShift-specific files would be created.
Running
mvn clean package
will do the heavy lifting for us and generate the files into target/helm/kubernetes/java2pod, namely into the /templates subdirectory:
When looking into these files, we see that they look a bit like the content from our “K8s hard way” approach:
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    app.quarkus.io/build-timestamp: 2023-05-14 - 19:57:58 +0000
    app.quarkus.io/commit-id: 9a73b69f298ba04885c26ee883479c3962547a08
  labels:
    app.kubernetes.io/name: java2pod-helm
    app.kubernetes.io/version: 1.0.0-SNAPSHOT
    app.kubernetes.io/managed-by: quarkus
  name: java2pod-helm
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
    - name: https
      port: 443
      protocol: TCP
      targetPort: 8443
  selector:
    app.kubernetes.io/name: java2pod-helm
    app.kubernetes.io/version: 1.0.0-SNAPSHOT
  type: {{ .Values.app.serviceType }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    app.quarkus.io/build-timestamp: 2023-05-14 - 19:57:58 +0000
    app.quarkus.io/commit-id: 9a73b69f298ba04885c26ee883479c3962547a08
  labels:
    app.kubernetes.io/version: 1.0.0-SNAPSHOT
    app.kubernetes.io/name: java2pod-helm
    app.kubernetes.io/managed-by: quarkus
  name: java2pod-helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/version: 1.0.0-SNAPSHOT
      app.kubernetes.io/name: java2pod-helm
  template:
    metadata:
      annotations:
        app.quarkus.io/build-timestamp: 2023-05-14 - 19:57:58 +0000
        app.quarkus.io/commit-id: 9a73b69f298ba04885c26ee883479c3962547a08
      labels:
        app.kubernetes.io/version: 1.0.0-SNAPSHOT
        app.kubernetes.io/name: java2pod-helm
        app.kubernetes.io/managed-by: quarkus
    spec:
      containers:
        - env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          image: {{ .Values.app.image }}
          imagePullPolicy: Always
          name: java2pod-helm
          ports:
            - containerPort: 8443
              name: https
              protocol: TCP
            - containerPort: 8080
              name: http
              protocol: TCP
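With the chart generated, installing it would look like installing any other local chart – a sketch; the release name and value overrides are assumptions derived from the placeholders visible in the templates above:

```shell
# Install the generated chart from the build output directory,
# filling in the templated values (.Values.app.*).
helm install java2pod target/helm/kubernetes/java2pod \
  --set app.serviceType=ClusterIP \
  --set app.image=quay.io/gresch/java2pod-helm:1.0.0-SNAPSHOT
```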
To proceed, we’d have to understand how to build Helm charts, maybe create a Helm repository to make the chart consumable for others, etc.
This basically means we’d not only have to understand K8s in depth, but also the extensive (and definitely powerful) Helm chart syntax and Helm’s concepts. We’d also have to take care of image building, specifying the image reference, and so on and so forth.
That’s even more to master! Therefore, for our purposes, Helm is a misdirection, as we won’t become K8s and Helm experts overnight.47
Dekorate?
Another approach to just generate the files needed for K8s is dekorate. The main idea is that you annotate your code with various annotations (K8s config, Helm, Knative, Jaeger, Prometheus – even special annotations for Minishift and Kind are available!) like so:
import io.dekorate.kubernetes.annotation.KubernetesApplication;

@KubernetesApplication
public class Main {

    public static void main(String[] args) {
        // Your application code goes here.
    }
}
Dekorate would then generate the resources needed to deploy your application. You can also follow an annotationless approach, though only when using Spring Boot.
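Under default settings, Dekorate hooks into the regular build and writes its manifests into the build output, so the workflow would roughly be (the output path below is the documented Dekorate default):

```shell
# Build the project; Dekorate generates the manifests during compilation.
mvn clean package
# Apply the generated resources manually.
kubectl apply -f target/classes/META-INF/dekorate/kubernetes.yml
```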
But that’s still not what we want: we’d still have to take care of applying the generated resources, we’d still need in-depth knowledge about what to specify, and we’d still have to take care of image handling. There must be a better way…
Full Automation with JKube
Meet JKube. JKube is a project of the Eclipse Foundation. Its purpose is to support building “Cloud-Native Java Applications without a hassle”.
The approach is basically:
- You add dependencies to a Kubernetes/OpenShift Maven/Gradle plugin.
- You are then able to go through the entire Java-to-Pod lifecycle as described in the build goal documentation.
So let’s see how this works! Basically, all we have to do is add the JKube Maven plugin to pom.xml:
<plugins>
    <!-- (...) -->
    <plugin>
        <groupId>org.eclipse.jkube</groupId>
        <artifactId>kubernetes-maven-plugin</artifactId>
        <version>1.12.0</version>
    </plugin>
    <!-- (...) -->
</plugins>
This is how we can build an image – in this case we specify Jib as the builder (handy, isn’t it?):
mvn k8s:build -Djkube.build.strategy=jib -Djkube.generator.name="quay.io/gresch/%a:%l"
And we push the image with
mvn k8s:push -Djkube.generator.name="quay.io/gresch/%a:%l"
And finally we can generate all resources needed to run our application in a Pod on K8s!
mvn k8s:resource k8s:apply -Djkube.generator.name="quay.io/gresch/%a:%l" -Djkube.namespace=j2p-jkube -Djkube.createExternalUrls=true -Djkube.domain=apps.ocp4.devworkshop.cc
Some explanations here:
- jkube.generator.name="quay.io/<user-/orgname>/%a:%l": specifies the image reference (incl. tag) at the external registry.
- jkube.namespace=<K8s namespace>: the namespace on the K8s cluster.
- jkube.createExternalUrls=true: automatically creates the Ingress routes.
- jkube.domain=mydomain.mytld: the external domain under which your application shall be available.
If you wonder whether you could specify these CLI parameters in a file – yes, you’re covered: you need to add them to the plugin specification in pom.xml. Find a full-blown example here.
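As a lighter alternative, the jkube.* flags can usually also live in pom.xml as plain Maven properties, since JKube resolves them as user properties – a sketch; the values simply mirror the command line used above:

```xml
<properties>
  <!-- mirrors the -D flags above; adjust registry, namespace, and domain -->
  <jkube.generator.name>quay.io/gresch/%a:%l</jkube.generator.name>
  <jkube.namespace>j2p-jkube</jkube.namespace>
  <jkube.createExternalUrls>true</jkube.createExternalUrls>
  <jkube.domain>apps.ocp4.devworkshop.cc</jkube.domain>
</properties>
```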
You could dive in extremely deep (as with dekorate or Helm) and add specific YAML to src/main/jkube, which is then used to “enrich” the generated configuration. See the documentation here.
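For instance, a resource fragment dropped into src/main/jkube could add resource requests and limits, and JKube would merge it into the generated Deployment – a sketch; the file name follows JKube’s fragment naming convention and the values themselves are just example numbers:

```yaml
# src/main/jkube/deployment.yaml – merged into the generated Deployment
spec:
  template:
    spec:
      containers:
        - resources:
            requests:
              memory: 256Mi
              cpu: 250m
            limits:
              memory: 512Mi
```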
From my point of view, JKube gives you the best of both worlds:
- Intuitive Java approach, starting with Maven/Gradle plugin.
- Full-lifecycle support.
- Dedicated to Kubernetes/OpenShift.
- From zero-config over XML to YAML.
OpenShift Ease With S2I
As a final option for OpenShift users48, the built-in Source-to-Image (S2I) mechanism should not be forgotten.
If we log in to OpenShift and create a project (OpenShift’s term for a K8s namespace), e.g.:
oc new-project j2p-s2i
oc new-app registry.access.redhat.com/ubi8/openjdk-17:latest~https://github.com/karstengresch/java2pod.git \
--context-dir=02-java2pod-extended
Then all the needed configuration (Deployment, Service, ConfigMaps, Secrets) is generated automatically, and we only need to make the application accessible:
oc expose service/java2pod
oc get route
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
java2pod java2pod-j2p-s2i.apps.ocp4.devworkshop.cc java2pod 8080-tcp None
This is another, really easy way to get from code to Pod.
Please note that registry.access.redhat.com/ubi8/openjdk-17:latest specifies an S2I builder image, followed by a tilde sign (~), followed by the Git repo URL, with a subdirectory specified via --context-dir (full documentation here).
Java to Pod: Interactive with odo
There’s even more! If you prefer to work on your code and at the same time interact with a Kubernetes (or OpenShift) cluster, have a look at odo.
It aims to improve the developer experience by making it easy to generate resources and deployments directly from the command line.
As odo is language agnostic and something to be integrated perhaps later on in your workflow, we leave it with an honorary mention.
Conclusion
The journey to get a Java application from source code to running in a Kubernetes pod can become quite tedious. Fortunately, Java developers need to become neither Docker gurus nor K8s administration experts: Java-based tooling like Jib and JKube allows for easy image creation, registry pushes, and K8s deployment without much hassle.
Especially JKube, a project of the Eclipse Foundation, not only supports the entire Java-to-Pod lifecycle, but also lets you specify everything from the ground up – besides its zero-configuration approach. This enables developers to dive step-by-step into the intricate world of K8s object configuration.
Outlook
In a follow-up article, we’ll have a look at automating, mixing, and matching all the above steps using K8s-native tools: Tekton and ArgoCD.
Feedback Welcome!
Love it? Hate it? Leave some comments below!49
- and not e.g. LXC or nspawn [↩]
- you can get a free instance, running for 30 days, from https://developers.redhat.com/developer-sandbox or run kind or minikube or – when using Quarkus – toggle devservices K8s support [↩]
- b.t.w. – if you really want to experience Quarkus’ developer joy, make sure you install the Quarkus CLI, e.g. with
sdk install quarkus
– the above command would be made much easier with quarkus create (...)
! [↩] - Find the source code at https://github.com/karstengresch/java2pod/tree/main/02-java2pod-extended if you’re… in a hurry [↩]
- e.g. having errors like “Plugin org.apache.maven.plugins:maven-surefire-plugin:x.x.x or one of its dependencies could not be resolved: org.apache.maven.plugins:maven-surefire-plugin:jar:x.x.x was not found in https://repo.maven.apache.org/maven2 during a previous attempt. This failure was cached in the local repository and resolution is not reattempted until the update interval of central has elapsed or updates are forced“. [↩]
- see https://quarkus.io/guides/getting-started#packaging-and-run-the-application [↩]
- If you want to know more about the new Quarkus Dev UI, check out this introduction from Phillip Krรผger https://github.com/quarkusio/quarkus/wiki/Dev-UI-v2 [↩]
- If you are curious how all this stuff works, have a look at: https://quarkus.io/guides/config-reference#system-properties. [↩]
- You can copy the application to another directory or grab it from here: https://github.com/karstengresch/java2pod/releases/download/0.1/java2pod-1.0.0-SNAPSHOT-runner.jar, but I recommend following the article step-by step or cherrypicking the repositories of the articles you’re interested in. [↩]
- We won’t go over the details, but have a look at this quite comprehensive reference here: https://docs.docker.com/engine/reference/builder/ [↩]
- e.g. with podman inspect when having used podman build for creating the image. Podman, by the way, uses Skopeo as a library to perform such tasks. [↩]
- Get the full container ID with docker ps --no-trunc to learn even more [↩]
- Side note: the container normally can only run on the architecture it was built on! [↩]
- In case you want to know what’s in such an image, just have a look at this Dockerfile – you see that things become more complex under the hood. You could even inspect the ubi8-minimal base image’s Dockerfile. If you want to dig in even more deeply and wonder what this FROM koji/image-build “base image” is, check out this article on base images. But this is far beyond where people interested in coding normally go… [↩]
- Learn more about it from this article: https://developers.redhat.com/articles/2021/05/24/build-lean-java-containers-new-red-hat-universal-base-images-openjdk-runtime#a_runtime_image [↩]
- 185 is historical for jboss/wildfly [↩]
- To go deeper, here are some challenges for you:
1. Inspect /src/main/docker/Dockerfile.jvm. Hint: this file is not made for an uber-jar, but for a library-dependent jar, see the explanation here: https://quarkus.io/guides/getting-started#packaging-and-run-the-application
2. Learn about native compilation and reflect upon a) the changes needed for the Dockerfile b) the advantages for operations. [↩] - Assuming this is what the name stands for, but couldn’t find a reliable source. [↩]
- We skip the CLI approach, but you can read more about it here: https://github.com/GoogleContainerTools/jib/tree/master/jib-cli. [↩]
- If it fails with a weird exception message it’s probably because you use a Docker Desktop version prior to 20.0.14, see https://github.com/adoptium/temurin-build/issues/2974. [↩]
- This can also be achieved by running:
mvn quarkus:add-extension -Dextensions='container-image-jib'
This command adds the dependency to pom.xml. By the way, ./mvnw or even the quarkus CLI tool (available via SDKman) would also work. [↩]
- Of course, we could just use the image ID. [↩]
- There is also support for Podman, but it’s a bit tricky: https://buildpacks.io/docs/app-developer-guide/building-on-podman/ – check the known issues & limitations before you start! [↩]
- As homework: try to change the base image to ubi [↩]
- Or we run
mvn quarkus:add-extension -Dextensions='container-image-buildpack'
[↩]
- macOS: brew install source-to-image should work, too. [↩]
- if you do so, please make sure to set imagePullPolicy: Never, see https://stackoverflow.com/questions/60228643/use-local-docker-image-without-registry-setup-in-k8s, or in depth: https://medium.com/swlh/how-to-run-locally-built-docker-images-in-kubernetes-b28fbc32cc1d [↩]
- both Docker and Quay can be operated privately, too! [↩]
- or others listed in the CNCF landscape [↩]
- In real life, you rarely would see just the Pod – it’d live surrounded by a myriad of other workload resources, such as Deployments, StatefulSets, DaemonSets, Jobs, CronJobs, etc. [↩]
- see https://docs.docker.com/docker-hub/repos/create/ [↩]
- Hint – make sure you just do this with example apps and not to publish stuff of your company, if applicable. [↩]
- regarding the latest tag see the hints above [↩]
- which in this case is 62e0943d5f8d and will differ on your computer [↩]
- docker tag <image ID, here 62e0943d5f8d> <tag, here quay.io/gresch/java2pod-extended:latest> [↩]
- try it out via https://demo.goharbor.io/ or with a local setup with kind and Helm: https://serverascode.com/2020/04/28/local-harbor-install.html; in general, the setup allows an overwhelming number of configuration parameters and would demand a separate, very long article. [↩]
- details see here: https://docs.openshift.com/container-platform/4.12/registry/accessing-the-registry.html [↩]
- see https://github.com/redhat-developer/app-labels/blob/master/labels-annotation-for-openshift.adoc; you could even do this later from the command line:
oc label deployment quarkus-native app.openshift.io/runtime=quarkus --overwrite
[↩] - normally local login is sufficient when developing, but check out further options here [↩]
- by the way, for this example we can use the same project directory – Buildpacks’ basic approach is quite uninvasive. [↩]
- Hint: if the Maven build fails with an error message like Could not transfer artifact – just retry. [↩]
- https://quarkus.io/guides/container-image#buildpack – at the end [↩]
- E.g. we could spin up a K8s instance with minikube; then minikube start [↩]
- Hint: try this out with the Developer Sandbox! [↩]
- Create an individual namespace with e.g.
kubectl create namespace j2p-yaml
and apply by specifying it with
kubectl apply -f java2pod-service-and-deployment.yaml -n j2p-yaml
[↩]
- You should be familiar enough by now to do this at the command line; if not, check the initial Quarkus examples [↩]
- The same would apply to Kustomize, which has the advantage of not forcing you to learn a Helm-specific DSL and of being integrated into K8s’ CLI, kubectl. [↩]
- Again: as a developer, just try the Developer Sandbox to get quite a holistic experience and an environment running for 30 days [before needing to get reprovisioned] [↩]
- This article was first published at https://gresch.io/2023/05/java-to-pod/ [↩]