tl;dr
A Kubernetes-native software engineering approach to developing AI applications helps you increase developer productivity, optimize resource consumption, and simplify operations. A hands-on demo of this approach can be seen here.
Two-step development approach
Using an AI/ML model in an application essentially requires a two-step development approach. The first phase covers everything from obtaining the raw data used for model training up to a fully trained AI model that is offered as a service to other applications. During the second phase, the application itself is developed and the AI model service is integrated into it. Considerable effort has been put into optimizing this first, Data Science-related phase, for example by applying MLOps principles as explained here.
Obstacles of AI application development
The second phase, which is more related to software engineering, must take AI-specific requirements into account. Typical obstacles include choosing the right AI framework and the coding itself, integrating the AI application into multiple deployment environments such as development, testing, and production, and providing sufficient compute power with optimized resource consumption.
Kubernetes-native software engineering
In a presentation at the event “Accelerate innovations with modern applications” on June 30, 2022, we addressed these issues with a Kubernetes-native software engineering approach to the aforementioned second phase of AI application development. With Red Hat OpenShift, all stages of the software development life cycle are carried out directly on the platform, using modern DevOps principles and GPU-accelerated computing.
We invite you to watch the recording of this session. It is aimed at developers of AI applications who want to increase their productivity, as well as IT operators who want to learn more about the specific needs of AI applications, and covers the following topics:
- Introduction to the Kubernetes-native software engineering approach and its advantages over classical approaches
- Hands-on demonstration of this approach, starting with the creation of a developer workspace and a convenient in-browser IDE using Red Hat CodeReady Workspaces (now called OpenShift Dev Spaces) running on Red Hat OpenShift
- Execution of an AI workload on both CPU and GPU using the NVIDIA GPU Operator
- Offering the AI workload as a service via a Red Hat OpenShift application that is built and deployed through a CI/CD pipeline using Red Hat OpenShift Pipelines (a minimal sketch of such a workload and service follows this list)
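To make the last two points concrete, here is a minimal Python sketch of what such a workload could look like: it runs a small model on a GPU when the NVIDIA GPU Operator has exposed one to the pod, falls back to CPU otherwise, and offers the result as an HTTP service. This is illustrative only and not the demo's actual code; the model, route, and port are assumptions.

```python
# Minimal sketch (not the demo's actual code): run a small AI workload on GPU
# if one is available inside the pod, fall back to CPU otherwise, and expose
# it as an HTTP service. Model, route, and port are illustrative.
import torch
from flask import Flask, jsonify, request

# "cuda" becomes available inside the pod once the NVIDIA GPU Operator has
# installed the driver and device plugin and a GPU has been requested.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in for a trained model; the real demo would load actual weights.
model = torch.nn.Linear(4, 2).to(device)
model.eval()

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"features": [0.1, 0.2, 0.3, 0.4]}
    features = request.get_json()["features"]
    x = torch.tensor([features], dtype=torch.float32, device=device)
    with torch.no_grad():
        y = model(x)
    return jsonify({"device": str(device), "output": y.cpu().tolist()})

if __name__ == "__main__":
    # On OpenShift this would sit behind a Service and a Route.
    app.run(host="0.0.0.0", port=8080)
```

Packaged into a container image and built and deployed through an OpenShift Pipelines CI/CD pipeline, a service like this can then be consumed by other applications via an OpenShift Route.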
Recording of the session
The repo on which the demo in the recording is based can be found on GitHub.
Please feel free to contact us (Benjamin & Manuel) with any questions, remarks, or ideas regarding Kubernetes-native software engineering, AI application development, or GPU usage on OpenShift.