- OAM, the Kubernetes Application Model Bridging Development and Deployment – InApps 2022
Why does Kubernetes need a unified application model? It would be the final piece needed to run a seamless deployment model, one connecting the developer to production, asserted Phil Prasek, a principal product manager at cloud services provider Upbound, in a breakout session at KubeCon + CloudNativeCon Europe.
With a solid application model in place, a developer could finally build a cloud native application on a laptop, and have it work seamlessly — with no modifications — in a production environment, Prasek said. Without a solid app model, the application deployment workflow gets too quickly mired in complexities, slowing things down.
The mission of the Cloud Native Computing Foundation‘s newly formed SIG App Delivery special interest group is to provide a standard way of defining the operational requirements for applications running on Kubernetes.
The Open Application Model (OAM), from Microsoft and Alibaba, is one such model. The goals of OAM are twofold, according to OAM contributor Ryan Zhang:
- Provide a standard application context for any microservice platform.
- Define a team-centric model that supports a clear separation of concerns between developers and operators.
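That developer/operator split is expressed in OAM through separate resource kinds: the developer describes *what* to run in a Component. As a rough sketch only (the field names below follow the v1alpha2 draft of the OAM spec; the Rudr implementation of that era used an earlier v1alpha1 schema, and the component name and image here are hypothetical), a developer-owned workload might look like:

```yaml
# Developer-owned: describes the workload itself, nothing operational.
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: example-web          # hypothetical component name
spec:
  workload:
    apiVersion: core.oam.dev/v1alpha2
    kind: ContainerizedWorkload
    spec:
      containers:
        - name: web
          image: example/web:1.0.0   # hypothetical image
          ports:
            - name: http
              containerPort: 8080
```

The operator never needs to edit this file; operational concerns attach separately, which is what makes the separation of concerns enforceable rather than a convention.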
A Kubernetes application definition should collect all the variables and configuration settings required to run a cloud native application. Ingress rules, required services and dependencies, security settings, auto-scaling, health monitoring, logging: the app def should capture all these things, so they can be relayed easily — and automatically — to the ops side, according to Prasek.
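In OAM's model, those operational settings attach to a component as traits inside an ApplicationConfiguration, owned by the ops side. The sketch below is illustrative and assumes the v1alpha2 draft schema; the component name and replica count are hypothetical, and traits for ingress, autoscaling, or logging would follow the same pattern:

```yaml
# Operator-owned: binds operational traits to a developer's component.
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
  name: example-app
spec:
  components:
    - componentName: example-web   # refers to a hypothetical Component
      traits:
        - trait:
            apiVersion: core.oam.dev/v1alpha2
            kind: ManualScalerTrait
            spec:
              replicaCount: 3      # scaling decided by ops, not the dev
```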
Helm can be used to template many of these settings through YAML, which could then be packaged into a Cloud Native Application Bundle (CNAB), in turn allowing the dev to schedule an application through a GitOps-run deployment process, perhaps one carried out by HashiCorp Terraform or AWS CloudFormation.
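For context, a CNAB is described by a `bundle.json` manifest that points at an invocation image containing the installation logic (for example, Helm-rendered manifests). This is a minimal sketch following the shape of the CNAB Core 1.0 format; the names and image are hypothetical:

```json
{
  "schemaVersion": "1.0.0",
  "name": "example-app",
  "version": "0.1.0",
  "description": "Hypothetical bundle wrapping Helm-rendered manifests",
  "invocationImages": [
    {
      "imageType": "docker",
      "image": "example/app-installer:0.1.0"
    }
  ]
}
```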
Even with such a workflow in place, it can still take as much as a month to get a complex Kubernetes app running in a stable way, given the complexities of how the different tools work together.
“Having these things glued together in the imperative pipelines,” as Prasek noted, leads to “multiple management models and multiple representations of state. And oftentimes, these are inconsistent across the different environments. And so it becomes increasingly difficult to master the interactions and the failure modes across all these tools. And that typically results in error-prone deployments and difficulty in just being able to understand what’s going on.”
“It’s definitely a huge time-sink, and not as efficient as it could be,” Prasek said.
Kubernetes CustomResourceDefinitions (and their associated service operators) address this issue of automating configuration, though the separation of concerns between dev and ops is still not cleanly drawn, Prasek contended. The next step, Prasek advised, is to package the operators as part of a service menu for platform engineers. The services themselves would be published as sets of containers, which the developer could assemble into the backbone of the app; that exact configuration then carries through into production.
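A platform team publishes such a menu item by registering a CRD, which gives developers a typed API object to request the service. This is an illustrative sketch using the standard apiextensions.k8s.io/v1 format; the group, kind, and spec fields are hypothetical:

```yaml
# Platform-owned: exposes a "Database" service on the cluster's API menu.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.platform.example.com   # hypothetical API group
spec:
  group: platform.example.com
  scope: Namespaced
  names:
    kind: Database
    plural: databases
    singular: database
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                engine:
                  type: string      # e.g. "postgresql"
                storageGB:
                  type: integer
```

An operator watching this CRD then provisions the backing service, so the developer's request and the production configuration are the same API object.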
“These are all Kubernetes API resources,” Prasek said. “And so I can get version control, I can use GitOps. I get all the benefits of being able to use the tooling that works with the kernel.”
An example of how an OAM-based deployment could work was provided in the session by Sudhanva Huruli, a Microsoft program manager on the Azure Container Compute team who also helps maintain the Rudr repository.
He used Crossplane, an open source Kubernetes-based control plane operated through kubectl, to deploy an application into a cluster via Kubernetes APIs.
In this demo, Crossplane manages infrastructure for the platform operator. It keeps the connection to the backend Azure cloud and exposes resources available to the developers. OAM is layered on top of Crossplane. This sample deployment was a microservices-based application backed by PostgreSQL. Developers can declare a PostgreSQL instance in their apps, and Crossplane will ensure everything is provisioned, Huruli said.
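The developer-facing side of that provisioning is a claim object. The Crossplane claim API has changed substantially across releases, so this sketch only approximates the shape of the early v1alpha1 portable claims; all names and values here are hypothetical:

```yaml
# Developer-owned claim: "my app needs a PostgreSQL database."
# Crossplane matches it to platform-provided infrastructure and
# provisions the real instance in the backing cloud (Azure, in the demo).
apiVersion: database.crossplane.io/v1alpha1
kind: PostgreSQLInstance
metadata:
  name: app-postgresql
spec:
  engineVersion: "9.6"
  writeConnectionSecretToRef:
    name: app-postgresql-conn   # secret the app reads credentials from
```

The developer never touches cloud credentials or provider APIs; the platform operator's Crossplane configuration decides where and how the database actually runs.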
The full talk can be enjoyed here:
The Cloud Native Computing Foundation and KubeCon+CloudNativeCon are sponsors of InApps.
Feature image by skeeze from Pixabay.