Introduction
This article is a step-by-step example of how to develop, build and deploy a cloud native application together with its infrastructure components. The application uses OpenShift as the underlying application platform with the following features:
- Red Hat Integration – AMQ Streams
- Red Hat OpenShift GitOps
- Red Hat OpenShift Pipelines
- Red Hat OpenShift Serverless
The use case is a feedback form, where users can give their feedback.
The repositories for this application are:
- The GitOps config repository: https://github.com/marcoklaassen/feedback-form-config
- The Feedback-Form-API application repository: https://github.com/marcoklaassen/feedback-form-api
- The Feedback-Persistence-API application repository: https://github.com/marcoklaassen/feedback-persistence-api
Architecture
Main application components
The use case is very simple and could certainly be implemented with a simpler architecture. But I decided to use the following architecture as a showcase for distributed microservices communicating via Kafka. The microservices are deployed as Knative services, so they are all able to scale to zero. To install and configure the Knative environment on OpenShift I used the Red Hat OpenShift Serverless operator. For developer experience reasons I decided to use Quarkus as the application development framework for my microservices. Those microservices are built with Tekton, using the Red Hat OpenShift Pipelines Operator to configure the pipelines. ArgoCD is responsible for the deployment of almost all of the Kubernetes resources and is itself configured and deployed by the Red Hat OpenShift GitOps Operator.
The app itself is divided into three components:
- feedback-form-api (Form-API)
- feedback-persistence-api (Persistence-API)
- feedback-form-ui (UI)
The Form-API receives simple HTTP requests with ratings (e.g. {rating: 5}) and writes the data from the request to a Kafka topic. The Persistence-API consumes the rating data, writes it to a MariaDB database and provides an endpoint to display all ratings as a JSON array. The Kafka instance is installed and configured by the Red Hat Integration – AMQ Streams operator. The UI isn't implemented yet. But if any of you would like to add a UI for this use case, I would appreciate it. Maybe I will also find some time in the future to add a simple UI for this project.
Environment components
There are also a few components which are responsible for the environment around the application:
- feedback-form-cicd (CICD Pipeline)
- feedback-form-infrastructure (Kubernetes Infrastructure)
The CICD Pipeline component includes
- the Tekton Pipeline
- a Maven cache for the pipeline
- the TriggerTemplate and the TriggerBinding
- the EventListener and its route.
The pipeline is designed to build Quarkus apps in general, so there is no need to define a separate pipeline for every Quarkus service. The Maven cache accelerates the pipeline runs because the dependencies downloaded from the Maven repositories are cached between builds. The TriggerTemplate and the TriggerBinding translate the parameters from the GitHub webhook into a pipeline run. The EventListener and its route are responsible for receiving the call from the GitHub repository in case of a new commit. Such a call triggers a new pipeline run via the TriggerBinding and the TriggerTemplate.
The Kubernetes Infrastructure component contains the
- MariaDB deployment and the
- Kafka Instance, Kafka Topic and Kafka User deployment.
The MariaDB deployment persists the ratings, and the Kafka instance, topic and user provide the streaming layer to send the ratings from the Form-API to the Persistence-API in an asynchronous manner.
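To give you an idea of what the AMQ Streams operator manages, here is a trimmed-down sketch of the three custom resources (cluster name, topic name, replica counts and storage settings are illustrative, not copied from the config repository):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: feedback-kafka
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    # needed so the operator reconciles KafkaTopic and KafkaUser resources
    topicOperator: {}
    userOperator: {}
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: feedback
  labels:
    strimzi.io/cluster: feedback-kafka
spec:
  partitions: 1
  replicas: 3
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: feedback-user
  labels:
    strimzi.io/cluster: feedback-kafka
spec:
  authentication:
    type: scram-sha-512
```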
Development
As we know from the architecture section, there are two main services we have to implement: the Form-API to accept the HTTP requests and send them to the Kafka topic, and the Persistence-API to consume the data from the Kafka topic and write it into the MariaDB database. Both services are Quarkus microservices and can be bootstrapped with the Quarkus CLI. For more information on how to bootstrap a new Quarkus application, have a look at https://quarkus.io/get-started/.
Form API
Before implementing Java classes, the Quarkus application should have the following additional Quarkus extensions and general dependencies:
quarkus-smallrye-reactive-messaging-kafka
quarkus-resteasy-jackson
lombok
The first class we should implement is the Feedback class, which defines the structure of a rating. In our case, the class has just one attribute of type integer – the rating itself. Additionally, there are some supporting Lombok annotations: the all-arguments constructor, the no-arguments constructor and the Data annotation to generate the getters, setters and toString methods.
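A minimal sketch of how such a class could look (the field name follows the {rating: 5} example above, everything else is illustrative):

```java
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

// @Data generates getters, setters and toString, the other two
// annotations generate the all-arguments and no-arguments constructors
@Data
@AllArgsConstructor
@NoArgsConstructor
public class Feedback {

    // the only attribute: the rating itself
    private Integer rating;
}
```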
With the given data structure, we can define the FeedbackService class. The service class provides the method to publish the feedback data to the Kafka topic. In Quarkus, we use the @Channel annotation to inject an event emitter. The application.properties file defines the connection between the channel and the Kafka topic, so we have a nice abstraction layer between the channel in our source code and the target data streaming platform.
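A sketch of the service class, assuming the channel is simply called feedback (the real channel and topic names are defined in the repository's application.properties):

```java
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

import org.eclipse.microprofile.reactive.messaging.Channel;
import org.eclipse.microprofile.reactive.messaging.Emitter;

@ApplicationScoped
public class FeedbackService {

    // the channel "feedback" is mapped to a Kafka topic in application.properties,
    // e.g. mp.messaging.outgoing.feedback.topic=feedback (illustrative values)
    @Inject
    @Channel("feedback")
    Emitter<Feedback> feedbackEmitter;

    public void send(Feedback feedback) {
        // publishes the feedback object to the configured Kafka topic
        feedbackEmitter.send(feedback);
    }
}
```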
The last Java class we implement is the FeedbackResource class. This class provides the HTTP endpoint to receive the feedback from the client and forward it to our service class. Those are the only three classes we need to implement in our Quarkus microservice which consumes feedback and sends it to a Kafka cluster.
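The resource class could look roughly like this; the /feedback path matches the curl example at the end of the article, the method name is illustrative:

```java
import javax.inject.Inject;
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/feedback")
public class FeedbackResource {

    @Inject
    FeedbackService feedbackService;

    // accepts e.g. {"rating": 5} and forwards it to the service class
    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    public Response receiveFeedback(Feedback feedback) {
        feedbackService.send(feedback);
        return Response.accepted().build();
    }
}
```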
With the command quarkus dev you can run this service locally on your machine, and you will get a first impression of the great Quarkus developer experience: you don't have to care about any infrastructure components for your local development environment. Quarkus just downloads a few container images, starts them for you and connects them to your locally running application. For more information about the "Quarkus Dev Services" have a look at https://quarkus.io/guides/dev-services.
Persistence API
The Persistence-API – also a Quarkus application – has a few additional dependencies:
quarkus-resteasy-reactive-jackson
quarkus-jdbc-mariadb
quarkus-flyway
flyway-mysql
quarkus-funqy-http
lombok
The RESTEasy extension includes the features to provide a REST interface. The JDBC MariaDB extension comes with everything you need to talk to a MariaDB database. Flyway is a library to manage your database schema, so your application can migrate the schema at startup to the level it needs. And we use the Funqy extension to implement simple serverless functions in Quarkus.
Maybe you are asking yourself why there is no Kafka or messaging dependency. This is because this service doesn't listen to a Kafka topic directly. Our services should be deployed as Knative serverless applications, so we need an abstraction layer: this service just provides a Funqy function which is triggered by a Knative KafkaSource object in our Kubernetes cluster. The KafkaSource and how it works will be explained in the deployment section.
For the persistence layer we implement the Feedback class by using the Java Persistence API (javax.persistence). To learn more about the persistence capabilities in Quarkus, have a look at https://quarkus.io/guides/hibernate-orm. Additionally, there is a named query defined which is used later in the debug interface to list all the persisted feedback objects. This debug interface is implemented by the FeedbackResource class.
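A sketch of the entity, assuming a generated technical id and a findAll named query (the exact names may differ in the repository):

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.NamedQuery;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

// the named query is what the debug interface uses to list all persisted feedback objects
@Entity
@NamedQuery(name = "Feedback.findAll", query = "SELECT f FROM Feedback f")
@Data
@AllArgsConstructor
@NoArgsConstructor
public class Feedback {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    // the rating received from the Kafka topic
    private Integer rating;
}
```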
The FeedbackFunction class includes the Funqy function, annotated with @Funq. It receives the feedback from the KafkaSource and persists it by using an entity manager. To get a deeper understanding of the Funqy framework, you can visit the Quarkus documentation at https://quarkus.io/guides/funqy.
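A sketch of the function class, assuming the function is simply called feedback (quarkus-funqy-http exposes it as an HTTP endpoint under the function name):

```java
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.persistence.EntityManager;
import javax.transaction.Transactional;

import io.quarkus.funqy.Funq;

@ApplicationScoped
public class FeedbackFunction {

    @Inject
    EntityManager entityManager;

    // the KafkaSource posts every event from the topic to this function via HTTP
    @Funq
    @Transactional
    public void feedback(Feedback feedback) {
        entityManager.persist(feedback);
    }
}
```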
In the migration folder you can see that there is a create table script to initialize the database schema. To enable Flyway, have a look into the application.properties file. When you start the application, e.g. via the Quarkus CLI, Quarkus will download the container image for the configured database and automatically migrate the database with the script you provided. This is also the case when you deploy your application to a Kubernetes cluster for the first time.
Build
To build our two Quarkus microservices, we use OpenShift Pipelines with Tekton. For the CICD part, there is a dedicated Argo app in the config repository for deploying the pipeline resources.
Because we don't want to trigger the build pipeline manually on every change of our source code, we configured webhooks in the GitHub repositories. You can add webhooks to your GitHub repository at https://github.com/<your-org>/<your-repo>/settings/hooks.
Now, on every change to our GitHub repository, the webhook fires and informs our cluster about the changes. The receiver behind the webhook is a Tekton EventListener with its route. A TriggerBinding maps the received data in the HTTP body to the parameters we need for our build. The TriggerTemplate uses this mapped data from the GitHub message to trigger a PipelineRun with the needed parameters. So our general quarkus-app-pipeline knows which repository and which git revision it should build.
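To make the trigger chain a bit more tangible, here is a trimmed-down sketch of the three resources (resource names, parameter names and the referenced GitHub payload fields are assumptions; the actual definitions live in the CICD part of the config repository):

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: quarkus-app-binding
spec:
  params:
    # map fields from the GitHub push payload to pipeline parameters
    - name: git-url
      value: $(body.repository.clone_url)
    - name: git-revision
      value: $(body.head_commit.id)
---
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: quarkus-app-template
spec:
  params:
    - name: git-url
    - name: git-revision
  resourcetemplates:
    # every received event creates a new PipelineRun with the mapped parameters
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: quarkus-app-pipeline-run-
      spec:
        pipelineRef:
          name: quarkus-app-pipeline
        params:
          - name: git-url
            value: $(tt.params.git-url)
          - name: git-revision
            value: $(tt.params.git-revision)
---
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: quarkus-app-listener
spec:
  serviceAccountName: pipeline
  triggers:
    - name: github-push
      bindings:
        - ref: quarkus-app-binding
      template:
        ref: quarkus-app-template
```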
The pipeline itself is quite simple. It
- clones the GitHub repo with the git-clone cluster task
- builds a native executable by using the ubi-quarkus-native-image and the Maven command
- builds the container image and pushes it to the OpenShift internal container registry
Deployment
The deployment is fully automated by ArgoCD. There are ArgoCD apps for
- The Form-API microservice (Knative service)
- The Persistence-API microservice (Knative service)
- The Form-CICD resources (Tekton resources)
- The Form-Infrastructure resources (Kafka, database)
Each of these ArgoCD apps has a reference to the feedback-form-apps directory (./feedback-form-config/feedback-form-apps) in the config repository, and those directories contain the Kubernetes resources and custom resources which should be deployed on the cluster. ArgoCD takes care that the resources are always in sync with the cluster, depending on the Argo app configuration. In the resource directories you will find different structures. This is because I decided to use a Helm chart to organise the infrastructure resources. For the Persistence-API and the Form-API I use kustomize as the template engine. And for the CICD resources we don't need any templating framework. For more information about kustomize have a look at https://kustomize.io/, and for Helm you can find out more at https://helm.sh/.
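As a rough sketch, such an Argo CD app definition could look like this (the repoURL is the config repository from above; path, target namespace and sync policy are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: feedback-form-api
  # the namespace where the OpenShift GitOps operator runs Argo CD
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/marcoklaassen/feedback-form-config
    targetRevision: main
    # directory inside the config repository that holds the resources for this app
    path: feedback-form-apps/feedback-form-api
  destination:
    server: https://kubernetes.default.svc
    namespace: feedback-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```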
Feedback-Form-API & Feedback-Persistence-API
The two microservices are deployed in a very similar way. In the config's base directory of each app there is a knative-service.yaml. This Knative Service definition defines the scaling options, the container image which should be deployed and the environment variables which should be injected into the container. In the case of the Persistence-API these are the database connection properties which the container needs; the Form-API just needs the Kafka connection properties.
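A condensed sketch of such a Knative Service (image reference, environment variable and scaling bounds are placeholders):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: feedback-form-api
spec:
  template:
    metadata:
      annotations:
        # "0" explicitly allows scale to zero (which is also the default)
        autoscaling.knative.dev/minScale: "0"
        autoscaling.knative.dev/maxScale: "3"
    spec:
      containers:
        - image: image-registry.openshift-image-registry.svc:5000/feedback-app/feedback-form-api:latest
          env:
            # Quarkus picks this up as kafka.bootstrap.servers
            - name: KAFKA_BOOTSTRAP_SERVERS
              value: feedback-kafka-kafka-bootstrap:9092
```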
The Persistence-API has one additional file: the kafka-source.yaml. This Knative KafkaSource listens to the Kafka topic the Form-API is streaming data to and forwards all events to the HTTP endpoint of the Funqy function of our Persistence-API. This is the reason why our persistence service can scale like every other Knative application: we have no direct connection or dependency to the Kafka topic itself. The service is just triggered and scaled by the HTTP requests of the KafkaSource instance.
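A sketch of this wiring (topic, bootstrap address and the Funqy endpoint path are placeholders that have to match your environment):

```yaml
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: feedback-kafka-source
spec:
  bootstrapServers:
    - feedback-kafka-kafka-bootstrap:9092
  topics:
    - feedback
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: feedback-persistence-api
    # relative path appended to the service URL: the HTTP endpoint of the Funqy function
    uri: /feedback
```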
Because we want to be able to have different stages of our deployments, there is a kustomization.yaml file. If we have a look into the dev directory of our config, we will see that the image's name and tag are replaced, so we can have different versions of the application deployed on different stages. There is also a 'dev' prefix added to all resource names, and some replacements in a few Kubernetes resources which we need to get our components ready for the dev stage. ArgoCD notices the presence of the kustomization.yaml and uses kustomize while applying our resources.
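A sketch of such an overlay kustomization.yaml for the dev stage (image names and tag are placeholders):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# pull in the stage independent base resources
resources:
  - ../base
# prefix every resource name with the stage
namePrefix: dev-
# replace the image name and tag for this stage
images:
  - name: feedback-form-api
    newName: image-registry.openshift-image-registry.svc:5000/feedback-app/feedback-form-api
    newTag: "1.0.0"
```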
Feedback-Form-Infrastructure
The infrastructure config is managed via a Helm chart. The chart includes the Kubernetes resources for the MariaDB deployment and for the Kafka instance and topic deployment. For staging we use the release name feature of Helm: for every stage dependent value in our resources, we add the release name as a variable. If we have a look into the feedback-form-infrastructure.yaml file in the argo-apps directory, we will see the Helm configuration. The release name and the values file we should use are referenced by the Argo app. And this is the reason why ArgoCD replaces all release name variables with 'dev' and selects the namespace from the values.yaml.
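As an illustration of the release name idea, a template in the chart could reference it like this (resource, label and value names are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  # resolves to dev-mariadb when Argo CD installs the chart with release name 'dev'
  name: {{ .Release.Name }}-mariadb
  namespace: {{ .Values.namespace }}
spec:
  selector:
    app: {{ .Release.Name }}-mariadb
  ports:
    - port: 3306
      targetPort: 3306
```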
Feedback-Form-CICD
The deployment of the CICD resources is quite simple: ArgoCD takes all the resources in the directory and deploys them as they are, without any templating procedures.
Summary
At the end we have
- An API to receive feedback
- A Kafka cluster to stream data between microservices
- An API to persist feedback objects
- A database
- A build pipeline which is triggered on every commit in the git repository automatically
- An implementation of a GitOps approach for our complete application landscape
The two microservices are Knative serverless components which are able to scale to zero, which means that there is no resource consumption if there is no traffic. The Kafka instance, the topic and the user are all managed by the AMQ Streams operator. The deployment of all Kubernetes resources and custom resources is done by ArgoCD. We have also seen different ways to organise and template YAML resources: with kustomize, with Helm charts or plain in a directory.
If you want to test the application, you can execute the following curl command to add a new rating to the Form-API microservice:
```bash
curl -X POST -i \
  -H 'Content-Type: application/json' \
  -d "{\"rating\":\"8\"}" \
  https://dev-feedback-form-feedback-app./feedback
```
And if you want to get the already persisted feedback objects, you can do it with this curl command:
```bash
curl -i https://dev-feedback-persistence-feedback-app./debug
```
And now: have fun rebuilding, developing and exploring the project. I am looking forward to your feedback and hope I could give you some inspiration with this article.