Option: Service Mesh
We have already introduced many different options for securing micro-service applications based on roles (RBAC). In this last part of the series, we will explore how to use a Service Mesh for RBAC. For the implementation of the Service Mesh, we are using the open source project Istio.
Overview of the Options & Links to Blogs
In Part 1, we’ve provided the context for the whole blog series. We recommend reading it first, because otherwise you will miss out on the context.
Below you find an overview of this blog series. Just click on a link to jump directly to the respective part:
| Blog Part | Implementation Option | Description |
| --- | --- | --- |
| (2/7) | HTTP Query Param | This is the most basic module where the “role” is transferred as an HTTP Query Parameter. The server validates the role programmatically. |
| (3/7) | Basic Authentication | A user agent uses Basic Authentication to transfer credentials. |
| (4/7) | JWT | A JSON Web Token (JWT) codifies claims that are granted and can be objectively validated by the receiver. |
| (5/7) | OpenID Connect and Keycloak | For further standardization, OpenID Connect is used as an identity layer. Keycloak acts as an intermediary to issue a JWT token. |
| (6/7) | Proxied API Gateway (3Scale) | ServiceB uses a proxied gateway (3Scale) which is responsible for enforcing RBAC. This is useful for legacy applications that can’t be enabled for OIDC. |
| This blog (7/7) | Service Mesh | All services are managed by a Service Mesh. The JWT is created outside and enforced by the Service Mesh. |
What do we want to achieve in this blog part?
Let’s first explain what a Service Mesh is:
A Service Mesh can be used for many use cases, e.g. application-wide tracing, advanced deployment strategies, dark launches, etc. Role-based Access Control alone would not justify the use of a Service Mesh, but it can be applied as an additional benefit. If you want to explore the full potential of a Service Mesh, please check out the Istio Tutorial on the Red Hat Scholars page.
From an architecture point of view, a Service Mesh injects a so-called side-car component which then takes care of the use cases listed above. These side-car components communicate with a central control plane, receive instructions and report data back. In a Kubernetes environment, the side-car component is deployed as a container in the same pod as the application component.
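To illustrate, a meshed application pod roughly looks like this after injection (a sketch with placeholder names and images, not taken from the actual deployment):
# Illustrative sketch only: after injection, the pod contains the application
# container plus the istio-proxy side-car (names and images are placeholders)
apiVersion: v1
kind: Pod
metadata:
  name: servicea-example
  annotations:
    sidecar.istio.io/inject: "true"   # opt-in flag for the injection
spec:
  containers:
  - name: servicea                    # the application container
    image: <application image>
  - name: istio-proxy                 # the injected side-car (Envoy proxy)
    image: <proxy image provided by the control plane>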
For this implementation, we are using the Service Mesh component of OpenShift. It is installed via an Operator, so we need an OpenShift cluster where Operators can be installed. The “Developer Sandbox” that we used in the previous posts doesn’t allow this flexibility. Thus, we need to either provision a managed OpenShift cluster or install an OpenShift cluster on our own infrastructure (self-managed).
Prerequisites
We are using the following tools:
- Maven: 3.8.6
- (optional): any IDE (e.g. VS Codium)
- Red Hat OpenShift: 4.12
- Red Hat OpenShift Service Mesh: 2.3
Code Base:
You can either:
- continue from the previous blog and clean up:
  - remove all the configuration settings that are related to the JWT and OIDC:
    quarkus.oidc.application-type=web-app
    quarkus.http.auth.permission.userEP.paths=/*
    quarkus.http.auth.permission.userEP.policy=authenticated
    quarkus.oidc.auth-server-url=https://sso-skraft-dev.apps.rhoam-ds-prod.xe9u.p1.openshiftapps.com/auth/realms/opensourcerer
    quarkus.oidc.client-id=
    quarkus.oidc.credentials.secret=
  - remove the “oidc” extension from ServiceA
- or clone the code base from here to have a clean start
Implementation
We will explain step-by-step how you can achieve multi-service RBAC with a Service Mesh. If you are only interested in the end result, you can clone it from git here.
Setting up Service Mesh on OpenShift
- Go to the OpenShift Web Console and make sure to be in the “Administrator view”
- There, you should see a menu item “Operators”. Click on “Installed Operators”
- The Service Mesh requires a few prerequisite operators that are not installed yet. Install them now:
- ElasticSearch Operator
- OpenShift distributed tracing platform
- Kiali
For each of them, go to the “OperatorHub” (in the “Operators” menu) and search for the name, e.g. “ElasticSearch”. Make sure to choose the “Red Hat” version of the operator, click on “Install” and confirm with “Install” again.
After some time, all 3 operators should be successfully installed.
- Now, install the Service Mesh operator – in exactly the same way as the prerequisite operators.
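If you prefer a declarative installation instead of the console, each operator corresponds to an OLM Subscription object. A sketch for the Service Mesh operator – package, channel and catalog source names should be verified against your cluster, e.g. with "oc get packagemanifests -n openshift-marketplace":
# Sketch of an OLM Subscription for the Service Mesh operator.
# Verify package, channel and catalog source names before applying.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: servicemeshoperator
  namespace: openshift-operators
spec:
  name: servicemeshoperator
  channel: stable
  source: redhat-operators
  sourceNamespace: openshift-marketplace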
- Create a project to house the central ServiceMesh components:
- Click in the menu “Home -> Projects”
- Click on the blue button (top right corner) “Create project”
- Call the project “istio-system” and click on “Create”
- Create the Service Mesh components:
- Go to “Operators -> Installed Operators”
- Click on the “Red Hat OpenShift Service Mesh”
- Make sure that you are in the project “istio-system”
- Switch to the tab “Istio Service Mesh Control Plane” and click on the blue button “Create ServiceMeshControlPlane”
- Click “Create”
- Click on the newly created “ServiceMeshControlPlane”. You can see that there are a lot of resources created and started.
Congratulations! You have successfully installed a Service Mesh on Red Hat OpenShift.
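If you prefer a declarative approach (e.g. via GitOps), the control plane can also be created as a YAML object. A minimal sketch, assuming the maistra.io/v2 API of Service Mesh 2.3 and defaults comparable to what the console wizard creates:
# Minimal ServiceMeshControlPlane sketch; the console wizard creates an
# equivalent object (typically named "basic") with sensible defaults
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
spec:
  version: v2.3
  tracing:
    type: Jaeger
  addons:
    jaeger:
      install:
        storage:
          type: Memory
    kiali:
      enabled: true
    grafana:
      enabled: true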
Deploying the services on OpenShift
Now, we need to also deploy ServiceA and ServiceB to OpenShift – with just a little tweak to make them part of the ServiceMesh.
- Create a project in OpenShift to house ServiceA and ServiceB
- We just need to specify that this project shall be managed by the Service Mesh – in other words, include the project in the ServiceMeshMemberRoll (a declarative sketch follows after these steps):
- Switch to the “istio-system” project
- Go to “Operators -> Installed Operators” and click on “Red Hat Openshift Service Mesh”
- Switch to the tab “ServiceMesh Member Roll” and click the button “Create ServiceMesh Member Roll”
- Expand the “members” section and enter the name of your project, e.g. “rbac-service-mesh”
- Click on “Create”
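For reference, the object behind this wizard is roughly the following (using the project name from the previous step):
# ServiceMeshMemberRoll: lists the projects that are managed by this control
# plane. The object is named "default" and lives in the istio-system project.
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system
spec:
  members:
  - rbac-service-mesh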
- Now, we will deploy the services as usual to OpenShift:
- Make sure that you have a connection to the OpenShift cluster and that you are pointing to the right project
- Add the following settings to the application.properties file:
For ServiceA:
quarkus.rest-client."org.acme.ExternalService".url=http://serviceb
org.eclipse.microprofile.rest.client.propagateHeaders=Authorization
quarkus.kubernetes.deploy=true
quarkus.container-image.group=rbac-service-mesh
quarkus.openshift.deployment-kind=deployment
quarkus.openshift.name=servicea
quarkus.kubernetes-client.trust-certs=true
quarkus.openshift.part-of=service-mesh-demo
quarkus.openshift.annotations."sidecar.istio.io/inject"=true
For ServiceB:
quarkus.kubernetes.deploy=true
quarkus.container-image.group=rbac-service-mesh
quarkus.openshift.deployment-kind=deployment
quarkus.openshift.name=serviceb
quarkus.kubernetes-client.trust-certs=true
quarkus.openshift.route.expose=true
quarkus.openshift.part-of=service-mesh-demo
quarkus.openshift.annotations."sidecar.istio.io/inject"=true
Most of the settings are already known from previous posts. Some are new:
- quarkus.kubernetes-client.trust-certs=true: This is required if the cluster is working with self-signed certificates
- quarkus.openshift.annotations."sidecar.istio.io/inject"=true: This is required in order to flag this component as part of the Service Mesh
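With these properties in place, the deployment is triggered by a regular Maven build. A minimal sketch, assuming you are logged in to the cluster with oc and the services live in directories named servicea and serviceb (adjust to your layout):
# Point oc to the right project (created earlier)
oc project rbac-service-mesh
# Build ServiceA; quarkus.kubernetes.deploy=true deploys it during the package phase
cd servicea
mvn clean package
# Repeat the build in the serviceb directory for ServiceB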
We can easily validate whether this has worked out, by checking whether a side-car container has been automatically started in the same pod:
- Go to “Workloads -> Pods”
- Click on the ServiceA or ServiceB pod
- Scroll down to the “Containers” section
As you can see, there are 2 containers:
- serviceb
- istio-proxy
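The same check can also be done from the CLI (assuming the project name rbac-service-mesh used above):
# Print the container names per pod; each service pod should list its own
# container plus "istio-proxy"
oc get pods -n rbac-service-mesh \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'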
- Adding a VirtualService and a Gateway to our Service Mesh:
In order to realize the Service Mesh flow, we need to add 2 objects that act as an ingress:
- In the OpenShift Web Console, click on the + sign (top right corner)
- Copy & paste the following 2 Kubernetes resources into the editor and click “Create”
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: servicea-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: servicea-gateway
spec:
  hosts:
  - "*"
  gateways:
  - servicea-gateway
  http:
  - match:
    - uri:
        prefix: /servicea
    rewrite:
      uri: /
    route:
    - destination:
        host: servicea
        port:
          number: 80
If the creation was successful, the console shows a confirmation screen for both objects.
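Alternatively, you can verify the two objects from the CLI (assuming they were created in the application project):
# Both resources should be listed in the application project
oc get gateway,virtualservice -n rbac-service-mesh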
Testing the Service Mesh
Now, let’s test the Service Mesh:
- Accessing the Service Mesh Ingress:
Maybe you have spotted that we have NOT exposed the services via OpenShift routes (as in the previous blog). The reason is that the Service Mesh works with its own ingress – the 2 objects (Gateway and VirtualService) that we have created above:
- In the CLI, enter:
export GATEWAY_URL=$(kubectl get route istio-ingressgateway -n istio-system -o=jsonpath="{.spec.host}")/servicea
- Try to access the Service Mesh ingress:
curl $GATEWAY_URL/serviceA/userEP
This should bring back:
I greet you because you are a user!
- (optional) Check out the tracing data:
- In the OpenShift Web Console, switch to the project “istio-system”
- In the “Administrator” perspective, go to “Networking -> Routes” view
- Click on the “jaeger” route location
- Log in again with your OpenShift credentials and grant access
- In the Jaeger GUI, select the Service “servicea.rbac-service-mesh” and “Find traces”
- You get a nice overview about all the traces and can further drill down.
- (optional) You can also explore the other capabilities of the Service Mesh by opening the Kiali, Grafana or Prometheus GUI.
Particularly, Kiali provides some nice visualization and statistics about the flow.
Activating RBAC for the Service Mesh
Now, we have deviated a bit from our original topic – RBAC. We want to enter our Service Mesh with a JWT and configure the Service Mesh to enforce certain role policies.
You might already have guessed how this will be accomplished. The code itself will not be touched. All policies and enforcements will happen via Kubernetes objects.
Currently, the access works without any restrictions. Let’s now add a policy that requires a certain role to access endpoints of our services, e.g. the policy for userEP would be:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: rbac-policy-userep
  namespace: rbac-service-mesh
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: servicea
  action: ALLOW
  rules:
  - to:
    - operation:
        methods:
        - GET
        paths:
        - '*/userEP'
    when:
    - key: 'request.auth.claims[role]'
      values:
      - customer
Remark:
- selector -> matchLabels -> app.kubernetes.io/name: servicea:
We are only protecting the entry service (“ServiceA”).
- We will use an existing JWT which contains the role “customer”; thus, we are using this role as an access condition.
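As an illustration (hypothetical, not applied in this walkthrough), a corresponding policy for the adminEP endpoint could require a different role, e.g. “admin”:
# Hypothetical sketch only: allow adminEP for callers whose JWT carries the role "admin"
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: rbac-policy-adminep
  namespace: rbac-service-mesh
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: servicea
  action: ALLOW
  rules:
  - to:
    - operation:
        methods:
        - GET
        paths:
        - '*/adminEP'
    when:
    - key: 'request.auth.claims[role]'
      values:
      - admin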
Moreover, we need to tell the Service Mesh where to find the keys to validate the JWT.
We don’t want to spend too much time setting up the JWT and the associated key set (JWKS), but rather reuse existing ones. If you are interested in all the details, please check out my previous post about JWT.
Let’s just add this object to our namespace to get this accomplished:
apiVersion: "security.istio.io/v1beta1"
kind: "RequestAuthentication"
metadata:
name: "user-jwt"
spec:
jwtRules:
- issuer: "[email protected]"
jwksUri: "https://gist.githubusercontent.com/lordofthejars/7dad589384612d7a6e18398ac0f10065/raw/ea0f8e7b729fb1df25d4dc60bf17dee409aad204/jwks.json"
Testing RBAC with the Service Mesh
Now, we want to test the functionality.
- You can use an existing token that contains the role “customer”:
token=$(curl https://gist.githubusercontent.com/lordofthejars/f590c80b8d83ea1244febb2c73954739/raw/21ec0ba0184726444d99018761cf0cd0ece35971/token.role.jwt -s)
- Now, you can test the different end-points (the commands are shown after this list):
  - without token:
    - userEP: HTTP 403
    - adminEP: HTTP 403
  - with token:
    - userEP: HTTP 200
    - adminEP: HTTP 403 (once an ALLOW policy is attached to a workload, any request that doesn’t match one of its rules is denied – and we only defined a rule for userEP)
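For reference, these checks can be run from the CLI as follows, reusing the $GATEWAY_URL and $token variables from above (-w prints only the returned HTTP status code):
# without token – both endpoints should return 403
curl -s -o /dev/null -w "%{http_code}\n" $GATEWAY_URL/serviceA/userEP
curl -s -o /dev/null -w "%{http_code}\n" $GATEWAY_URL/serviceA/adminEP
# with token – userEP should return 200, adminEP stays 403
curl -s -o /dev/null -w "%{http_code}\n" -H "Authorization: Bearer $token" $GATEWAY_URL/serviceA/userEP
curl -s -o /dev/null -w "%{http_code}\n" -H "Authorization: Bearer $token" $GATEWAY_URL/serviceA/adminEP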
Conclusion
A Service Mesh is a very convenient way to manage micro-service applications by providing an overall governance layer. This governance layer can also be used for RBAC.
Advantages:
- RBAC policies are native Kubernetes objects and can thus nicely be managed like other Kubernetes objects of the project (e.g. via gitops)
- The code doesn’t need to be polluted with any annotations or commands
- If there are changes, the application doesn’t need to be redeployed, nor restarted
Disadvantages:
- The policies are defined by the Service Mesh implementation and might face certain limitations (e.g. Istio currently doesn’t support regex matching for paths – see https://github.com/istio/istio/issues/25021)
- The RBAC rules are only applied at the entry of the Service Mesh (Virtual Service, Gateway) and not for downstream services.