The datacenter of the future

August 8, 2022

When I try to look into the future, what do I see for the data center? Which requirements will data centers have to comply with? What could a solution look like? In the following text I will try to answer those questions.

Executive summary

In the coming years the usage of services hosted on cloud platforms will continue to rise, driven by the need to:

  • Increase flexibility
  • Interact more closely with external service providers
  • Reduce cost
  • Concentrate on competitive advantages
  • Meet increasing regulatory requirements

This means that a company has to define its cloud strategy. From my point of view this means at least a hybrid cloud strategy which encompasses several clouds: on-premises, SaaS, and other public and dedicated clouds. For most companies, concentrating on a single cloud will not be possible. This leads to the need to standardize the processes required to operate those different environments.

Then the decision to “make or buy” has to be made. “Make” always implies that enough employees are available to implement and maintain those new services. In the current difficult labor market the question arises whether it is not better to focus the available workforce on the company's competitive advantage rather than on services that are also available as a product.

With OpenShift and its supporting components, a standardizing software framework based on Kubernetes is available, with the advantage of a supported software component that includes future enhancements.

The hybrid cloud concept also means that the current way of looking at a data center will change: it will evolve into a Service Deployment Center. In this center of excellence you no longer provide a single on-premises environment where the data is the center of gravity, but several clouds which host a multitude of services. Those services need to be selected, integrated, governed, operated and negotiated with.

For the future of the banking and insurance industries, have a look at my view:

A dynamic world and its repercussions

The world is changing. The focus is on generating a continued and increasing stream of revenue in a market with shrinking margins. To foster loyalty and continued interaction with customers, changes need to be delivered at an increased speed. With this also comes an increased need for flexibility in the IT infrastructure.

Changes to applications need to be implemented more quickly. As experts become harder to find, the need arises to concentrate on the business-critical parts of the company’s application landscape. Outsourcing by using SaaS offerings in the cloud is one possible solution, but with it comes the need to integrate those services. New contractual challenges, new integration options and new security risks need to be addressed and solved.

With dynamic customer expectations also comes an unpredictable need for compute power. In the past, the process of extending the on-premises infrastructure was time consuming and not fast enough.

The overall answer to this is to use the cloud. 

So the IT infrastructure of a company needs to become more open and more flexible while reducing its cost. The focus shifts away from a data-centric base of operations towards a service-centric approach that includes services on premises, services in the cloud and SaaS offerings. This is why we should no longer define this as a data center but evolve it into a Service Deployment Center. This center of excellence will take on new tasks: governing all necessary services, interacting with service providers and choosing the right platform on which to deploy a service. These platforms will include all kinds of clouds: public clouds, dedicated clouds, SaaS clouds and on-premises clouds. The main task is to choose the right platform by mapping each service's requirements to the functional and non-functional requirements of the company. For example, the needed backend integration, access to data, the required security constraints and the cost of hosting the service need to be taken into consideration.

The on-premises cloud as part of the offerings

Although the definition of the cloud does not exclude an on-premises cloud, some companies reduce it to platforms hosted by public cloud providers only. The parameters for choosing the most fitting platform for a service are not only the cost of consumed compute power or the flexibility to grow or shrink it. Further parameters include:

  • Access to specific high-volume datasets
  • Access to specific hardware like the mainframe
  • Necessary interaction with non-IT hardware like document scanning facilities or printers to send out paper-based documents
  • Reduced regulatory requirements of an in-house cloud
    • The market shows that regulated companies are moving back to on-premises clouds, as the requirements are easier to fulfill in an on-premises data center.
  • Cost of hosting a service which is used 24/7 and has constant and predictable compute power requirements

Depending on the requirements of a service the deployment options vary, but they also include the on-premises cloud.
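
To make this mapping tangible, here is a purely illustrative sketch that scores candidate platforms against the requirements of a single service. The parameters, weights and platform profiles are made up for illustration and are not a recommendation for a real evaluation model.

```python
# Purely illustrative sketch: score candidate platforms against the
# requirements of one service. Parameters, weights and platform profiles
# are made up; a real evaluation would use the company's own criteria.
SERVICE_REQUIREMENTS = {
    "needs_mainframe_access": True,    # e.g. backend integration on the mainframe
    "high_volume_data_onsite": True,   # large datasets that live on premises
    "constant_24x7_load": True,        # predictable, always-on compute
    "elastic_scaling": False,          # bursty, unpredictable compute
}

PLATFORM_PROFILES = {
    "on-premises cloud": {"needs_mainframe_access": 1.0, "high_volume_data_onsite": 1.0,
                          "constant_24x7_load": 0.9, "elastic_scaling": 0.3},
    "public cloud":      {"needs_mainframe_access": 0.1, "high_volume_data_onsite": 0.3,
                          "constant_24x7_load": 0.5, "elastic_scaling": 1.0},
    "dedicated cloud":   {"needs_mainframe_access": 0.4, "high_volume_data_onsite": 0.6,
                          "constant_24x7_load": 0.8, "elastic_scaling": 0.6},
}

def best_platform(requirements: dict, profiles: dict) -> str:
    # Sum the profile scores for every requirement the service actually has.
    def score(profile: dict) -> float:
        return sum(profile[req] for req, needed in requirements.items() if needed)
    return max(profiles, key=lambda name: score(profiles[name]))

print(best_platform(SERVICE_REQUIREMENTS, PLATFORM_PROFILES))  # -> "on-premises cloud"
```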

Finding the needle in the haystack or the elephant in the room?

So, integrating a multitude of services is coming. But why not use a single cloud and deploy everything there? Or is it better to find the best fit for each service?

Deploying everything to a single cloud reduces the cost of integration, reduces the security challenges and opens up the use of cloud-specific services and APIs. Creating and deploying instances and services within it is less complex. Also, with a higher volume of transactions and usage comes a lower price per transaction.

On the other hand, choosing a best-of-breed concept allows you to concentrate on the services and not their environment. There will be services which need a specific provider and cannot be used if that provider is not available in the chosen environment. Regulatory requirements might force you to have an exit strategy. SaaS offerings might only be available on a specific cloud. And lastly, you reduce the lock-in to a single provider and the side effects of a monopoly.

So from my point of view, any company will end up with more than one cloud provider, and the disadvantages of this need to be solved. Several tasks need to be completed when interacting with the different platforms:

  • Creation, destruction and billing of instances
  • Implementation of provider-specific deployment processes (see the sketch after this list)
  • Enforcement of security constraints
  • Network configuration towards services hosted on other clouds
  • Data access on the different platforms
  • Logging, monitoring and debugging of services on different platforms
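
Without a consolidating platform, even the deployment task alone means repeating the same steps per provider. The sketch below shows a bare-bones version of that repetition: the same Deployment applied to several clusters by looping over kubeconfig contexts. The context names, namespace and image are hypothetical; this is exactly the kind of glue code a unifying layer is meant to replace.

```python
# Bare-bones sketch: apply the same Deployment to several clusters by looping
# over kubeconfig contexts. Context names, namespace and image are hypothetical.
from kubernetes import client, config

CLUSTERS = ["onprem-openshift", "public-cloud-east", "dedicated-cloud-eu"]  # hypothetical contexts

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="catalog-service"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "catalog-service"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "catalog-service"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="catalog", image="registry.example.com/catalog:1.0")]
            ),
        ),
    ),
)

for context in CLUSTERS:
    # One API client per cluster; credentials, endpoints and quotas still differ per provider.
    api = client.AppsV1Api(config.new_client_from_config(context=context))
    api.create_namespaced_deployment(namespace="services", body=deployment)
    print(f"deployed catalog-service to {context}")
```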

Here again the basic decision has to be made: make or buy.

A consolidating platform will help to ease those tasks. Kubernetes is a starting point, but leaves several of them open. With OpenShift and the hybrid cloud concept more of those tasks are already standardized, and building on that with Red Hat Advanced Cluster Management (RHACM) you get several more options to integrate different platforms. Looking into the future, it is also planned* to implement a multi-cluster management feature; have a look at this video from the OpenShift Commons event (https://www.youtube.com/watch?v=OF0C6DSooYQ) for further details. This will enable dynamic deployment to different clouds, with RHACM as the unifying component for monitoring the deployments.
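
To make this a little more concrete, the sketch below expresses a placement decision as an RHACM application lifecycle object on the hub cluster. It is a minimal sketch, assuming RHACM's PlacementRule custom resource and a kubeconfig pointing at the hub; the names, namespace and the environment=dev label are made up for illustration, and the API group and version reflect the release current at the time of writing.

```python
# Minimal sketch: create an RHACM-style PlacementRule on the hub cluster so that
# subscribed workloads land on all managed clusters labeled environment=dev.
# Names, namespace and labels are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig pointing at the RHACM hub

placement_rule = {
    "apiVersion": "apps.open-cluster-management.io/v1",
    "kind": "PlacementRule",
    "metadata": {"name": "fraud-detection-placement", "namespace": "fraud-detection"},
    "spec": {
        "clusterSelector": {"matchLabels": {"environment": "dev"}},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="apps.open-cluster-management.io",
    version="v1",
    namespace="fraud-detection",
    plural="placementrules",
    body=placement_rule,
)
```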

Now assume that the ability to deploy services to different cloud platforms is available. What would the next step be?

Instead of looking at each service/application as a monolith, the services can take advantage of the features of different clouds to run faster and more cost-effectively. For this, the service needs to be decomposed, or partitioned, into logical units. For example, a fraud detection service needs access to specific data but also an abundance of compute power. Within a containerized platform, microservices** can be used to implement such a partitioning. Using the flat network planned* to be implemented with RHACM, the microservices do not need to be aware that other parts of the service run on other platforms. Microservice A accesses the minimal amount of data needed and transfers it to microservice B, which then evaluates the risk of fraud using the compute power of a specific platform.
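
As an illustration, the sketch below shows what the handover from microservice A to microservice B could look like from A's point of view. The hostname fraud-scoring.svc.example, the /score endpoint and the payload fields are all hypothetical, and reachability across clouds is assumed to come from the flat network or a service mesh.

```python
# Minimal sketch: microservice A extracts the minimal data for one transaction
# and hands it to microservice B (the compute-heavy fraud scorer) over HTTP.
# The hostname, endpoint and fields are hypothetical; cross-cloud reachability
# is assumed to be provided by the flat network / service mesh.
import requests

FRAUD_SCORER_URL = "http://fraud-scoring.svc.example/score"  # hypothetical endpoint of microservice B

def score_transaction(transaction: dict) -> float:
    # Pass on only the minimal fields B needs, not the full customer record.
    minimal_payload = {
        "amount": transaction["amount"],
        "currency": transaction["currency"],
        "merchant_category": transaction["merchant_category"],
        "country": transaction["country"],
    }
    response = requests.post(FRAUD_SCORER_URL, json=minimal_payload, timeout=2)
    response.raise_for_status()
    return response.json()["fraud_risk"]  # B returns a risk score between 0 and 1
```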

So as a second step the application design will have to follow the cloud strategy of the company.

Legacy Apps: OMG what do we do with those?

The path into this new world is not a big bang. Each company has applications residing on legacy virtual machines which might never be migrated to a container-based application/service. It is essential to find a way, at least during the migration phase, to still be able to access those services. A common containerized model to host all applications, even those inside virtual machines, is essential to reduce the burden of managing services. With OpenShift Virtualization*** an on-premises cloud can be used to host those images while still being able to easily monitor, operate and access them.
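
To illustrate the idea, the sketch below registers a virtual machine as a Kubernetes object so it can be monitored and operated alongside the containerized services. It is a minimal sketch, assuming the KubeVirt-style VirtualMachine custom resource that OpenShift Virtualization builds on; the name, namespace and the demo container disk image are placeholders, and the field names reflect the kubevirt.io/v1 API at the time of writing.

```python
# Minimal sketch: register a small VM as a VirtualMachine custom resource so it
# is managed like any other workload on the cluster. Names and the demo disk
# image are placeholders.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig for the on-premises cluster

legacy_vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "legacy-app-vm", "namespace": "legacy-apps"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {"disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]},
                    "resources": {"requests": {"memory": "2Gi"}},
                },
                "volumes": [
                    {"name": "rootdisk",
                     "containerDisk": {"image": "quay.io/kubevirt/cirros-container-disk-demo"}}
                ],
            }
        },
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubevirt.io",
    version="v1",
    namespace="legacy-apps",
    plural="virtualmachines",
    body=legacy_vm,
)
```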

But there are other kinds of legacy apps, especially in the financial industry, like applications running on the mainframe. Those applications are platform dependent and are unlikely to be migrated to a cloud platform away from the plethora of data they usually need access to. With the planned* multi-cluster approach those applications can remain in an on-premises cloud but can be enhanced by services/parts running on other platforms.

Summary

To summarize: in a hybrid cloud environment, which from my point of view no company can avoid, the Service Deployment Center will supersede the data center. The focus will be to categorize the available services, find a cost-effective fit for the hosting platform, supply the base methods to interact with the platforms and set the base requirements on subjects such as security and regulation. The current implementation of a data center will be one of the platform providers; many parameters will point to an on-premises instance, but with the increased need to be flexible and cost effective by using on-premises cloud technology.

*= Red Hat can, without notice, remove or change planned future features and functions and their timeline.

**= Have a look here for further details on Red Hat's implementation of a service mesh: https://www.redhat.com/en/technologies/cloud-computing/openshift/what-is-openshift-service-mesh

***= More details on OpenShift Virtualization: https://www.redhat.com/en/technologies/cloud-computing/openshift/virtualization