How to accelerate your journey to cloud

October 4, 2021

Summary

In this blog I want to describe certain activities that can help accelerate and stabilize the journey towards a public or hybrid cloud approach. As a basis I used the Cognizant whitepaper, a joint collaboration with Timo Pfahl, Head of IT at SIX Interbank Clearing AG, and Robert Bonomo, Lead of the QA practice at Cognizant DACH and Eastern Europe.

Unfortunately the whitepaper is only available in German, which is why the screenshots from it in this blog are in German.

Management Summary

There are certain aspects that can push your cloud migration journey. Use loose coupling to surrounding services to gain more flexibility in the actual move of applications. Containerize your applications to make them more portable. Switch your test data as much as possible to synthetic data to face fewer legal and security hurdles. Standardize and automate your testing approach to get better verification possibilities. Besides that, automated QA will be one of the major drivers in your business case.

Cloud Migration Accelerators

Doing certain things before you start the core activities around your public / hybrid / multi cloud journey will make things way easier. In the following chapters I want to elaborate on some examples in more detail.

Loose Coupling

What it is

Every application has dependencies within its own application infrastructure (e.g. backend to frontend), but also to surrounding systems and services, for example a validation service used for an account transaction. The major hurdle in a migration to a different infrastructure is not only to “break” these dependencies, but also to understand them. If you want to go with a staggered migration approach in waves rather than a big bang, understanding these dependencies is key.

Pic1 – Staging concept with loose coupling and central test data management [1]

In the picture [1] you can see a staged view of an application lifecycle, which is anything but new. The difference is that in this approach none of the applications talk to each other directly. They use a middle layer, which is also not new (e.g. a service bus). What matters is the logic behind it:

  • 1st case: the service I try to call is available. I hit the service via the middle layer, which records my request on the fly and later the response coming back from the service. For me nothing changes in this service call, the behaviour stays the same. But in the background the middle layer has recorded or updated a mock data set.
  • 2nd case: the service I try to call is not available. With an automated failover approach the middle layer plays back a suitable mock which it recorded before. Again, for me it looks as if everything were up and running. A minimal sketch of this logic follows below.
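
Here is a minimal sketch of such a record/replay middle layer in Python. The service URL and the in-memory mock store are illustrative assumptions; a real middle layer, e.g. a service bus with service virtualization, would persist the recordings:

```python
import requests

class RecordingMiddleLayer:
    """Forwards calls while the target service is up, records the responses,
    and plays a recorded mock back when the service is down."""

    def __init__(self):
        self.mock_store = {}  # (url, params) -> last recorded response

    def call(self, url, params=None):
        key = (url, frozenset((params or {}).items()))
        try:
            # 1st case: service available -> forward the call and update the mock
            response = requests.get(url, params=params, timeout=2)
            response.raise_for_status()
            self.mock_store[key] = response.json()
            return self.mock_store[key]
        except requests.RequestException:
            # 2nd case: service down -> automated failover to the recorded mock
            if key in self.mock_store:
                return self.mock_store[key]
            raise  # nothing recorded yet, so there is nothing to play back

# For the caller nothing changes -- the same call works in both cases:
# layer = RecordingMiddleLayer()
# result = layer.call("https://validation.example.com/account", {"iban": "..."})
```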

Benefit

More flexibility in the migration, better stability in regression testing

  • It is quite unrealistic that complete application infrastructures are migrated at the same time. So, how do you take a subset but keep the situation for all applications as it is? Exactly: via a loose coupling approach like this.
    If we talk about a public cloud approach, you may synchronize these recorded mock data sets between the two worlds. You then need the same middleware tooling installed on both sides. Now you can simulate, for applications on both sides, a setup as if they were still in the same situation as before.
  • Another important aspect is the permanent regression testing capability. With something like this in place, you avoid your tests turning red whenever a mandatory external service is down. For specific GUI tests it may not even be relevant whether you perform them with mock data or real data. Loose coupling and this kind of mocking support help you stay very flexible while remaining very close to the real world, keeping your automated testing dashboards in a state very close to reality, as the sketch below illustrates.
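
In a regression suite this means a test can assert against the middle layer without caring whether the answer came from the live service or a recording. A minimal pytest sketch (the middlelayer module, the endpoint and the response schema are hypothetical, reusing the RecordingMiddleLayer from the sketch above):

```python
import pytest
from middlelayer import RecordingMiddleLayer  # hypothetical module from the sketch above

@pytest.fixture
def middle_layer():
    return RecordingMiddleLayer()

def test_account_validation(middle_layer):
    """Stays green whether the validation service answered live or a recorded
    mock was played back -- the middle layer hides the difference."""
    result = middle_layer.call("https://validation.example.com/account",
                               {"iban": "CH00 0000 0000 0000 0000 0"})
    assert result["status"] == "valid"  # hypothetical response schema
```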

Containerization

What it is

The topic of containerization is maybe not a big surprise: by abstracting an application onto a container platform, it obviously becomes easier to migrate it anywhere that container platform can be operated.
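
As a small illustration with the Docker SDK for Python: the same image starts unchanged wherever a container runtime is available, whether on a laptop, on premise or at a hyperscaler (the image name is a placeholder):

```python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()  # talks to whichever local container runtime is configured

# The very same image can be started on premise or in any cloud;
# the platform underneath becomes an implementation detail.
logs = client.containers.run("registry.example.org/myapp:1.0", remove=True)
print(logs.decode())
```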

Benefit

Better portability, faster migration, unleashing the full potential of the cloud’s scalability, support for an easy exit strategy (legal requirement)

Still, one benefit is not always very prominent: if you have applications which don’t need much scalability and you want to go for a long-term virtual machine in the public cloud, it can help to have all these applications in containers. Why is that? When starting discussions with a hyperscaler around discount options, already containerized workload can serve as a migration assurance, since it increases the credibility that the migration will succeed within the time promised in the contract.

Another benefit worth mentioning concerns portability. If you go everywhere with the same container technology, it is way easier to fulfil the following requirement, coming for example in Switzerland from FINMA (Swiss Financial Market Supervisory Authority):

“Furthermore, the eventuality of a change of service provider and the possible consequences of such a change must be considered when deciding to outsource and selecting the service provider. […] Provision must be made for insourcing the outsourced function in an orderly manner.” (FINMA Circular 2018/3, margin 18) [6]

In short, it must be possible to execute an exit strategy within 3 – 6 months. Of course, the closer you are to the cloud provider’s proprietary services, the harder the vendor lock-in, and the harder it is to fulfil this requirement.

In other words: the more portable you make your application infrastructure with generic tools as homework before going to the cloud, the less trouble you will later have with this requirement.

Synthetic Test Data

What it is

At a high level we can distinguish between three types of data, as you can see in the picture [2]:

  • Synthetic test data: test data created from scratch which has no relation, not even a structural one, to productive data (see the sketch below the picture)
  • Anonymized data: client-related data is taken and the link to the client is removed. Hard anonymization removes even structural indicators pointing to the client, whereas soft anonymization keeps them.
  • Productive test data: a snapshot of data from production

Pic2 – Test data classification [2]
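
A minimal sketch of how synthetic test data can be created from scratch, here with the Faker library (the locale, field names and record structure are illustrative assumptions and carry no relation to any productive data model):

```python
from faker import Faker  # pip install faker

fake = Faker("de_CH")  # Swiss locale, purely for plausible-looking values

def synthetic_account():
    """One account record created from scratch -- no productive data involved."""
    return {
        "name": fake.name(),
        "iban": fake.iban(),
        "address": fake.address(),
    }

test_accounts = [synthetic_account() for _ in range(100)]
```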

Benefit

Fewer security and legal hurdles, an accelerator for early pilot phases

Synthetic data is free of almost all legal and security concerns, so migrating it to the public cloud is very uncritical. Sure, the code still needs to be secured, as it is intellectual property. But in the end, having data which doesn’t need much security attention helps a lot in the whole story.

This is especially relevant for the early phases of a cloud journey. There you want to run pilots, and maybe even an MVP (minimum viable product), with as little effort and risk as possible. The MVP requires much more attention since it should cover the PROD stage; therefore it comes with more challenges around connectivity and security compared to a pilot.

Automated Testing

What it is

When it comes to automated testing, the relevant question isn’t the “what” but the “how it can help”, and that is what I want to put into focus.

First of all, the more automation you drive, the more your reliability and security will grow. Security through hardening through automation [3] means: the more you have automated to the left and right of your core testing, the more reliably and flexibly major changes can be done. So not only the tests themselves should stay in focus, but also the environment needed to perform them, meaning the applications as well as the test infrastructure.
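
One way to keep the test environment itself automated is to provision it as part of the test run. A minimal sketch with the testcontainers library (the database image and the smoke-test query are assumptions; the same idea applies to any service the tests depend on):

```python
import sqlalchemy
from testcontainers.postgres import PostgresContainer  # pip install testcontainers

# The database the tests need is created and destroyed by the test run itself,
# so the environment is exactly as reproducible as the tests.
with PostgresContainer("postgres:14") as pg:
    engine = sqlalchemy.create_engine(pg.get_connection_url())
    with engine.connect() as conn:
        assert conn.execute(sqlalchemy.text("SELECT 1")).scalar() == 1
```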

Benefit

An indirect benefit is the acceleration of your automated regression testing through cloud scalability.

This section, too, has two areas of interest when it comes to a cloud migration. If you go into the cloud with a rather high automation level, you will profit from way more benefits.

Since your test infrastructure would be automated, you can leverage the new scalability coming along with your cloud infrastructure. Where a Continuous Integration setup may have run tests for hours on your on-premise infrastructure before, in the cloud you can do this in minutes. This means you could think of running more tests every time a developer changes a line of code (incremental test scenario). That is an incredible new setup with a huge business benefit: where before a bug may have stayed in the code for weeks or months, you would now find it minutes after the code change. Consider that the longer a bug stays in the code, the more expensive the fix becomes later. This makes this topic the real business case deal.
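
One simple way to turn that scalability into shorter runs is to shard the suite across parallel workers. A minimal sketch (the shard list and the pytest invocation are placeholders; in real setups, tools like pytest-xdist or parallel CI jobs do this):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

TEST_SHARDS = ["tests/api", "tests/gui", "tests/batch"]  # placeholder shards

def run_shard(path):
    # Each shard runs as its own pytest process; in the cloud each one
    # could just as well be its own elastic worker node.
    return subprocess.run(["pytest", path], capture_output=True).returncode

# Scale the worker pool with the (cloud) capacity you have available.
with ThreadPoolExecutor(max_workers=len(TEST_SHARDS)) as pool:
    results = list(pool.map(run_shard, TEST_SHARDS))

exit_code = max(results)  # non-zero if any shard failed
print(f"Overall result: {exit_code}")
```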

And nothing in these thoughts is black or white; running a well-mixed setup is key. Some tests you can run fully public, some maybe in a private cloud setup, and some you still want to run on premise. The more automation and the more sophisticated the management approach you drive, the more flexible you are. Here, topics like multi cluster management [4] or, also interesting for DevSecOps purposes, advanced cluster security [5] come into play, but they are not part of this blog. In the end, this is something you should consider for every test: whether it makes sense to have it automated and, if yes, where to run it.

Pic3 – Automated testing business case considerations [7]

So, in a kind of grooming phase around a new automated test it indeed makes sense to weigh the “investment” (one time + maintenance) for the automation against the “benefit” which comes later.
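
This consideration can be made tangible with a simple break-even calculation (all numbers are purely illustrative assumptions):

```python
# When does automating a test pay off? (illustrative numbers)
one_time_automation = 16.0   # hours to build the automated test
maintenance_per_run = 0.05   # hours of upkeep per automated execution
manual_effort_per_run = 0.5  # hours to execute the same test manually

break_even_runs = one_time_automation / (manual_effort_per_run - maintenance_per_run)
print(f"Automation pays off after ~{break_even_runs:.0f} executions")  # ~36
```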

As always, the truth lies somewhere in between, and not only in the pursuit of a 100% automated testing coverage.

Summary

At the end of the day it is like with every other transformation journey: the better the preparation homework is done, the smoother the transformation goes. But in the end, the switch to a cloud provider isn’t a guarantee for benefits per se. You also need to think about how you develop to get the most out of it.

Sources

[1] Pic1 – Picture from the whitepaper https://www.whitepaper.ctsapprentice.ch page No.12

[2] Pic2 – Picture from the whitepaper https://www.whitepaper.ctsapprentice.ch page No.8

[3] 5 ways to harden a new system with Ansible, https://www.redhat.com/sysadmin/harden-new-system-ansible

[4] Multi cluster management, https://www.redhat.com/en/technologies/management/advanced-cluster-management

[5] Red Hat Advanced Cluster Security for Kubernetes, https://www.redhat.com/en/technologies/cloud-computing/openshift/advanced-cluster-security-kubernetes

[6] FINMA Circular 2018/3 Outsourcing

[7] Pic3 – Picture from the whitepaper https://www.whitepaper.ctsapprentice.ch page No.5