Escaping the Moving Target Platform Dilemma

August 1, 2022

tl;dr To ensure application consistency across distributed (multi/hybrid) cloud environments, streamline your target platform from the bottom up. This helps you deal with the Moving Target Platform Dilemma (MTPD).

As mentioned in the previous article, using the SaaS Kubernetes (K8s) offerings from cloud providers (such as EKS, AKS, GKE) causes your application services to run on diverse underlying platforms that you either cannot control at all or only in an extremely restricted fashion.

To ensure that your services run as consistently as possible on the different target platforms, the most obvious solution I see is streamlining your K8s operations from the ground up.

Namely, this involves the following parts:

SaaS K8s Opt-out

This is the bitter truth, especially if you’re already invested in different K8s SaaS offerings.

You either need to get rid of these “native” K8s offerings from the cloud providers, or streamline to only one provider1.

Alternatively, you could set up a system that at least documents the various software versions related to your target platform. That may satisfy regulations, but it will of course not ensure that your services run consistently.
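For illustration only, such an inventory could be as simple as a YAML file kept in Git; all cluster names and versions below are hypothetical:

```yaml
# Hypothetical per-cluster version inventory, kept in Git purely
# for audit/compliance purposes. It documents drift, it does not
# prevent it.
clusters:
  - name: prod-eks-eu-west-1        # placeholder name
    provider: aws-eks
    kubernetes: 1.22.9
    os: amazon-linux-2
    containerRuntime: containerd 1.4.13
  - name: prod-aks-westeurope       # placeholder name
    provider: azure-aks
    kubernetes: 1.23.5
    os: ubuntu-18.04
    containerRuntime: containerd 1.5.11
```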

That said, I assume you got my point and we can start streamlining our K8s target platforms…

Base Approach

The base approach boils down to this:

Same OS, same K8s, same cluster services, everywhere.

Operating System

As we can barely affect the hardware architecture at the hyperscalers (including the BIOSes), the first layer we can streamline is the operating system.

Though “containers are Linux”2 isn’t technically correct anymore, as the Open Container Initiative specification allows other container runtime targets3, you cannot run the K8s control plane natively on Windows yet. Thus, I’d definitely go for Linux as the target OS for running K8s4.

The sort of distribution that allows for extensive streamlining is an OSTree/libostree-based one. The libostree mechanism essentially makes the OS immutable by applying a Git-like structure to all packages of the OS and “checking them out” via hard links5.
So there is no package manager, but branches derived from atomic updates of the entire OS.6

A good choice is Fedora CoreOS (https://getfedora.org/en/coreos?stream=stable).
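Provisioning then becomes declarative, too: Fedora CoreOS nodes are configured via Ignition, typically generated from Butane YAML. A minimal sketch, assuming the stock core user; the SSH key and hostname are placeholders:

```yaml
# Minimal Butane config for a Fedora CoreOS K8s node.
# Compile to Ignition with: butane node.bu > node.ign
variant: fcos
version: 1.4.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... admin@example.com  # placeholder key
storage:
  files:
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: k8s-node-01  # placeholder hostname
```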

Base Libraries (Incl. Container Runtime)

To prepare the K8s installation78, you need to make some decisions regarding the base libraries, first and foremost the container runtime (e.g., CRI-O or containerd).
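Whatever runtime you pick, the node preparation steps from the kubeadm docs7 can be codified. A minimal sketch as an Ansible playbook (the k8s_nodes inventory group is an assumption):

```yaml
# Sketch: kubeadm node prerequisites (kernel modules and sysctls)
# as an Ansible playbook. Requires the community.general and
# ansible.posix collections; "k8s_nodes" is a placeholder group.
- hosts: k8s_nodes
  become: true
  tasks:
    - name: Load kernel modules needed by the container runtime
      community.general.modprobe:
        name: "{{ item }}"
        state: present
      loop:
        - overlay
        - br_netfilter

    - name: Let iptables see bridged traffic and enable IP forwarding
      ansible.posix.sysctl:
        name: "{{ item }}"
        value: "1"
        state: present
      loop:
        - net.bridge.bridge-nf-call-iptables
        - net.bridge.bridge-nf-call-ip6tables
        - net.ipv4.ip_forward
```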

K8s Version/Distribution

Needless to say, you need to decide on a K8s version.

If you do not want to use plain vanilla K8s9, you could choose one of the certified K8s distributions10.

These distributions can facilitate the entire installation and relieve you of many of the decisions about the underlying platform.11
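If you stay with vanilla K8s, at least pin the version decision declaratively. A sketch of a kubeadm configuration file (version and pod CIDR are example values):

```yaml
# Sketch: pinning the K8s version for kubeadm.
# Used via: kubeadm init --config cluster.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.24.3   # example version; pin it deliberately
networking:
  podSubnet: 10.244.0.0/16   # example CIDR; must match your CNI
```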

Streamlined Rollout

To fully automate the K8s cluster rollout, it’s useful to leverage tools such as Ansible. To me, this definitely implies practicing infrastructure-as-code (IaC).

A tool that supports IaC in a K8s-native way (leveraging Git repositories to version the target infrastructure) is ArgoCD.

With ArgoCD, you basically define your desired target K8s clusters in YAML files, put them under Git version control, and let ArgoCD do the rest12.
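As a sketch, such a desired state could be an ArgoCD Application that keeps a folder of cluster services in sync; the repository URL, path, and namespaces are placeholders:

```yaml
# Sketch: an ArgoCD Application syncing cluster services from Git.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-services
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/cluster-services.git  # placeholder
    targetRevision: main
    path: overlays/prod          # placeholder path
  destination:
    server: https://kubernetes.default.svc
    namespace: cluster-services
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift on the cluster
```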

Outlook

Working with Red Hat, a company that offers open source software for the enterprise, I learned that this approach actually works for thousands of enterprise customers. Red Hat’s certified K8s distribution OpenShift (its open source, aka upstream, project is OKD) allows streamlining the entire K8s stack. And actually much more:

(Figure: Streamline all the stack. The Red Hat approach.)

As you can see in the figure above, many (many!)13 more components can be involved when it comes to streamlining the AppOps environment (e.g. using Istio or Knative).

But I hope you get the point.

(Figure: Red Hat’s mission statement. See here.)

The nice thing, which serves Red Hat’s mission (“better technology the open-source way”), is that all these components are not only marketed commercially (Red Hat offers support, not software, as it’s open source) but are all backed by open source projects.

Either way, a DIY or a commercially supported streamlined stack is the way to escape the Moving Target Platform Dilemma.

(This article was first published on gresch.io).

  1. A scenario that comes with a lot of business implications and a huge single-vendor dependency. I never recommend this approach, and a lot of failed projects I came across prove me right.
  2. https://www.redhat.com/en/blog/containers-are-linux
  3. See this great article: https://iximiuz.com/en/posts/oci-containers/
  4. And actually containers, too. My rule of thumb is: only use Windows containers when it is otherwise technically impossible to operate the workload. And I will not change my mind unless Windows is fully open source. I.e., never.
  5. Comparable to http://linux-vserver.org/index.php?title=util-vserver:Vhashify&oldid=2285
  6. Unfortunately, there aren’t a lot of production-ready choices here. Speak to your deb-based distribution dealer to change this situation.
  7. https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
  8. https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
  9. Tip: check the issues at https://github.com/kubernetes/kubernetes/milestones
  10. Filter out the cloud-based SaaS offerings from the list: https://www.cncf.io/certification/software-conformance/#logos
  11. Though technically not a perfect match, I love the comparison of the vanilla Linux kernel vs. vanilla K8s: both can’t run on their own (hi, GNU!), and it rarely gives an enterprise a competitive advantage to maintain an individual distribution. In the Linux space, it’s predominantly hardware vendors (NAS/firewalls/routers) who invest in this.
  12. See this great intro. You actually can do this from a desktop host using kind, but I’m in favor of a separate management master cluster.
  13. Did I say: many?