The New Basics of Configuration Management in Kubernetes
Ofer Idan is VP of AI & Machine Learning at Carbon Relay. Born and raised in Tel Aviv, he served in the Israeli military, achieving the rank of Captain. He received a BS in Physics & Mathematics from Technion Israel Institute of Technology and earned a Ph.D. in Biomedical Engineering from Columbia University. After graduating, Ofer joined the Boston Consulting Group, where he focused on strategy and operations in Fortune 500 healthcare organizations. Prior to joining Carbon Relay, Ofer was a data scientist at NavHealth, a healthcare tech startup, where he led the development of ROI models for patient care in value-based health systems.
Against the backdrop of cloud technologies going mainstream, the enterprise IT migration to containerization in general, and to Kubernetes in particular, is well underway. Some organizations are making the move in response to competitive pressures and the need for greater business agility. Others are making the switch for economic reasons; they want more cost-effective IT operations and see Kubernetes as a smart way to get there.
The momentum of this push to Kubernetes is understandable. Its benefits are too compelling to ignore. For IT operations, it makes applications more portable and scalable than alternatives, simpler to develop, and easier, faster, and cheaper to deploy. Essentially, Kubernetes enables companies to support their growth and change in nimble, efficient, and cost-effective ways.
That’s the promise. But the reality is that DevOps and IT teams in many organizations still can’t quite get their Kubernetes-powered operations to “fly right.”
The reason is the system’s complexity. This stems partially from the flexibility of Kubernetes, which gives teams seemingly endless options and choices. However, that flexibility morphs into complexity as teams initially work to get their clusters up and running. With their clusters up but applications not performing to their liking, teams then try to tune their apps. That’s when they really hit the complexity wall with Kubernetes.
For organizations that are early in their Kubernetes journey, this complexity makes it difficult for their teams to get applications to deploy reliably and have consistently high performance. For enterprises that are further along in their Kubernetes migrations, complexity is what’s preventing them from realizing their anticipated cost savings.
The Old-School Approach Falls Short
As for software products that help teams get over their Kubernetes complexity hurdles, the options have been limited. There’s no shortage of services for deploying Kubernetes clusters, and products for monitoring application performance. But to date, there have been no available solutions specifically designed for optimizing how applications run in Kubernetes environments.
Without software-driven options, DevOps and IT teams have tackled the problem the old-fashioned way: manually, through trial and error. They change one or two variables, then nervously wait to see the impact. Often it's unclear why changing "A" caused "B" to break, so they keep tinkering. For businesses where application performance is paramount, such as SaaS companies or MSPs, teams often default to costly overprovisioning.
Hence the complexity-related problems cited above. Some occur at the cluster level, such as deciding how large to make nodes and how many of them to create. But many more crop up at the application level.
As an example, let’s look at a web app such as an e-commerce site. Minimizing latency is critical for a smooth user experience, so that is a key consideration. To achieve that goal consistently, the app needs to be tuned properly.
When the app is deployed in Kubernetes, it's up to a DevOps or IT team member to select the number of instances and to choose how much CPU, memory, and other resources to allocate to each one. Allocate too few resources, and the app can slow down or even crash. Allocate too many, and cloud costs skyrocket. Figuring out the "just right" configuration settings, and doing so quickly, accurately, and consistently for a growing roster of apps, is a tall order.
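As a concrete illustration of those knobs, here is a minimal Kubernetes Deployment fragment for a hypothetical web app. The name `storefront`, the image, the replica count, and the resource figures are all illustrative assumptions, not recommendations:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: storefront          # hypothetical e-commerce web app
spec:
  replicas: 3               # number of instances -- the first knob to tune
  selector:
    matchLabels:
      app: storefront
  template:
    metadata:
      labels:
        app: storefront
    spec:
      containers:
        - name: storefront
          image: example.com/storefront:1.0   # placeholder image
          resources:
            requests:       # what the scheduler reserves per instance
              cpu: 500m
              memory: 512Mi
            limits:         # hard ceiling per instance
              cpu: "1"
              memory: 1Gi
```

Every figure here interacts with the others: raise `replicas` and the per-instance requests multiply across the cluster; set the limits too low and the app may be throttled or OOM-killed under load.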
The fact is, configuration management in Kubernetes is a multidimensional chess game, and one that DevOps and IT teams are losing too often. To win, and do so consistently, they need a better way forward.
A Smarter, More Effective Way Forward Emerges
There’s good news for DevOps and IT teams that are presently wrestling with Kubernetes’ complexity. A new, software-driven approach for handling the basics of application configuration in Kubernetes environments has emerged. Powered by advanced machine learning, this new approach eliminates most of this complexity by automatically determining optimal application configuration parameters.
These technologies, which build upon established methods in data science, allow DevOps teams to automate the process of parameter tuning, thereby freeing them to focus on other mission-critical tasks. Using machine learning-powered experimentation, these platforms allow for efficient exploration of the application parameter space, resulting in configurations that are guaranteed to both deploy reliably and perform optimally. As with all powerful ML techniques, the ability to learn over time plays a crucial role in making the process scalable and more efficient. With the help of these technologies, teams can rest assured that development and scaling of their applications will fit naturally into the optimization process, which will become more intelligent with time.
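The experiment-driven exploration described above can be sketched in a few lines of Python. This is a toy random search, not any vendor's actual algorithm: the parameter names, the latency/cost model inside `measure`, and the latency budget are all invented for illustration. A real platform would run live load tests against the cluster and use far smarter search, such as Bayesian optimization.

```python
import random

# Hypothetical parameter space for one application; the names and
# ranges are illustrative, not drawn from any specific product.
PARAM_SPACE = {
    "replicas": range(1, 11),                 # number of pod instances
    "cpu_millicores": range(100, 2100, 100),  # CPU request per instance
    "memory_mib": range(128, 4224, 128),      # memory request per instance
}

def measure(config):
    """Stand-in for a real experiment: deploy the config, run a load
    test, and return (latency_ms, hourly_cost). Here we fake it with a
    simple model in which more resources lower latency but raise cost."""
    capacity = config["replicas"] * config["cpu_millicores"]
    latency = 5000.0 / max(capacity, 1) + config["memory_mib"] * 0.001
    cost = config["replicas"] * (
        config["cpu_millicores"] * 0.00002 + config["memory_mib"] * 0.000005
    )
    return latency, cost

def random_search(trials=200, latency_budget_ms=10.0, seed=42):
    """Sample the space at random and keep the cheapest configuration
    that meets the latency budget -- the simplest possible form of the
    experiment-driven tuning described above."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        config = {k: rng.choice(list(v)) for k, v in PARAM_SPACE.items()}
        latency, cost = measure(config)
        if latency <= latency_budget_ms and (best is None or cost < best[1]):
            best = (config, cost, latency)
    return best

result = random_search()
if result:
    config, cost, latency = result
    print(f"best config: {config}, cost/hr: {cost:.4f}, latency: {latency:.2f} ms")
```

Even this crude sketch shows why manual trial and error breaks down: the toy space already has 10 × 20 × 32 = 6,400 combinations per application, and production spaces are far larger.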
In short, ML-powered approaches for deploying, optimizing, scaling and managing containerized applications in Kubernetes environments are coming into the spotlight. They are proving themselves by intelligently analyzing and managing hundreds of interrelated variables with millions of potential combinations to automatically select the optimal settings for each application.
Returning to our web app example: rather than the DevOps team struggling to determine the best parameter values, the optimized parameters are delivered to them automatically. The organization and its customers both benefit from a more reliable, higher-quality user experience.
It’s all about high performance and reliability with cost-efficiency. By enabling easier and more effective deployment of applications, and ensuring that they are properly resourced and optimally configured, the new, ML-based approach will be a catalyst that creates even more Kubernetes adoption and success. And that’s a very good thing.
Feature image via Pixabay.