Overcoming Kubernetes’ ‘Day 2’ Challenges – InApps


Stephan Fabel

Stephan Fabel has ten years of hands-on cloud architecture and product management experience. He ran one of the first production OpenStack data centers at the University of Hawaii; scoped, designed and managed global cloud implementations for major customers including Apple, Verizon and SAP; and led the product management process for hybrid cloud monitoring tools based on needs identified at the largest cloud customers. At Canonical, Stephan is a product manager focused on containers, LXD (machine containers) and platform as a service.

Survey after survey has shown Kubernetes’ soaring popularity as a tool for automating the deployment and management of containerized applications in the cloud. However, it’s also becoming clear that Kubernetes is still an evolving technology and is experiencing some growing pains as it matures.

Kubernetes was designed in 2014 by Google engineers Joe Beda, Brendan Burns and Craig McLuckie, and — despite more than four ensuing years as an ever-updated open-source project — Kubernetes still often feels as though it was built for other engineers rather than the IT masses.

It will be important in 2020 and beyond for the open-source community and vendors to simplify and refine the Kubernetes stack with more tools that automate the management of the platform. Otherwise, disillusionment with Kubernetes’ challenges could threaten to derail the technology’s momentum.


These challenges represent an opportunity for vendors that can help an organization properly design and build Kubernetes clusters based on its specific workload needs and even take on the ongoing monitoring and management.


Why is Kubernetes experiencing some growing pains? Tales from the IT trenches reveal a common theme: Kubernetes can be too hard to set up and manage.

The issue isn’t Kubernetes in its simplest form: an orchestrator tasked with placing newly created containers onto servers and, through a CNI (container network interface) plugin, allocating network interfaces for them. But Kubernetes gets tricky when more and more is asked of it as a platform for clustered applications composed of multiple microservices.
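
To make that baseline concrete, here is a minimal sketch using the official Kubernetes Python client that asks the cluster to schedule a small replicated workload. The “web” name and nginx image are placeholders, and in practice this is usually written as a YAML manifest and applied with kubectl rather than code.

```python
# A minimal sketch (illustrative, not from the article) using the official
# "kubernetes" Python client: ask the cluster to run three replicas of a
# containerized workload. The "web" name and nginx image are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a pod

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

# The scheduler places the pods on nodes; the CNI plugin gives each pod its
# network interface. At this level, Kubernetes does what it promises.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```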

Custom Resource Definitions

For example, take CRDs (Custom Resource Definitions), a feature introduced in Kubernetes 1.7 that allows users to add their own custom object types to a cluster and tell Kubernetes how to act when API calls for those objects come in.

While the CRD is a powerful capability, it adds complexity by essentially turning Kubernetes clusters into extensible versions of themselves. That requires a whole new set of considerations beyond the CRD itself: validation rules, a controller, and a way to deploy all of this code to the Kubernetes cluster.
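
As a rough illustration of what that extensibility involves, the sketch below registers a hypothetical `Backup` CRD with an OpenAPI validation schema through the Python client. The `example.com` group and its fields are invented, and a real deployment would still need a controller watching these objects and a pipeline to ship it all to the cluster.

```python
# A hedged sketch of what "extending the cluster" entails: registering a CRD
# with validation rules. The "backups.example.com" group and its fields are
# made up; a real setup also needs a controller watching these objects.
from kubernetes import client, config

config.load_kube_config()

crd = client.V1CustomResourceDefinition(
    api_version="apiextensions.k8s.io/v1",
    kind="CustomResourceDefinition",
    metadata=client.V1ObjectMeta(name="backups.example.com"),
    spec=client.V1CustomResourceDefinitionSpec(
        group="example.com",
        scope="Namespaced",
        names=client.V1CustomResourceDefinitionNames(
            plural="backups", singular="backup", kind="Backup"
        ),
        versions=[
            client.V1CustomResourceDefinitionVersion(
                name="v1",
                served=True,
                storage=True,
                # Validation rules: the API server rejects objects that do not
                # match this schema.
                schema=client.V1CustomResourceValidation(
                    open_api_v3_schema=client.V1JSONSchemaProps(
                        type="object",
                        properties={
                            "spec": client.V1JSONSchemaProps(
                                type="object",
                                properties={
                                    "schedule": client.V1JSONSchemaProps(type="string"),
                                    "retention": client.V1JSONSchemaProps(
                                        type="integer", minimum=1
                                    ),
                                },
                                required=["schedule"],
                            )
                        },
                    )
                ),
            )
        ],
    ),
)

client.ApiextensionsV1Api().create_custom_resource_definition(body=crd)
```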

This isn’t the only example of Kubernetes requiring extra work to make it ready for prime time in an enterprise production environment. Others include having to create a cloud network load balancer to send traffic to the correct port on cluster nodes, monitoring tools, and integration into the build pipeline.

These processes, by and large, are not yet automated and require custom coding.
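
For instance, the cloud load balancer step mentioned above typically comes down to creating a Service of type LoadBalancer. The sketch below shows roughly what that looks like with the Python client; the names and ports are illustrative.

```python
# Illustrative sketch: expose cluster workloads through a cloud load balancer
# that forwards traffic to the right port on the nodes. Names and ports are
# placeholders, and the cloud provider integration must already be configured.
from kubernetes import client, config

config.load_kube_config()

service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",      # asks the cloud provider to provision an external load balancer
        selector={"app": "web"},  # routes traffic to pods carrying this label
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```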

I hear many customers complain that while their Kubernetes pilots ran smoothly and were successful, problems arose on “Day 2.” They’ve found it’s easy to break an app when upgrading a Kubernetes cluster in production. That’s because the work involves a heavy amount of workload movement as each node is cordoned, drained of its containers and then repopulated.
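
Here is a rough sketch of that node-by-node work, again assuming the Python client: each node is cordoned so nothing new lands on it, and its pods then have to be evicted and rescheduled elsewhere before the node can be upgraded. The node name is hypothetical, and a full `kubectl drain` also honors PodDisruptionBudgets and skips DaemonSet pods.

```python
# A hedged sketch of "Day 2" upgrade work: cordon a node and list the workloads
# that must be moved off it before it can be taken down for an upgrade.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

node_name = "worker-1"  # hypothetical node about to be upgraded

# Cordon: mark the node unschedulable so no new pods land on it.
v1.patch_node(node_name, {"spec": {"unschedulable": True}})

# Everything still running here must be evicted and rescheduled elsewhere.
pods = v1.list_pod_for_all_namespaces(field_selector=f"spec.nodeName={node_name}")
for pod in pods.items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name} must be rescheduled")
```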

Popular Kubernetes-related platform components have emerged, such as Istio, which intelligently controls the flow of traffic and API calls between services, and Linkerd, a light service mesh designed to provide observability and reliability without coding changes. However, these tools tend to have a single focus and don’t necessarily play well together.
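
To give a flavor of that single focus, the sketch below creates an Istio VirtualService that splits traffic between a stable and a canary version of a hypothetical `web` service. Istio has to be installed already, the subsets assume a matching DestinationRule that is not shown, and Linkerd would express the same intent with entirely different resources.

```python
# Illustrative sketch: an Istio VirtualService (itself a CRD) that sends 10% of
# traffic to a canary. Hostnames and subsets are made up; Istio must already be
# installed in the cluster for this resource to have any effect.
from kubernetes import client, config

config.load_kube_config()

virtual_service = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "VirtualService",
    "metadata": {"name": "web"},
    "spec": {
        "hosts": ["web"],
        "http": [
            {
                "route": [
                    {"destination": {"host": "web", "subset": "stable"}, "weight": 90},
                    {"destination": {"host": "web", "subset": "canary"}, "weight": 10},
                ]
            }
        ],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1beta1",
    namespace="default",
    plural="virtualservices",
    body=virtual_service,
)
```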


Security patching is often a manual process rather than an automated process in the background.

Tools for storage management, network management, and resource definition are not yet mature.

And, as is also the case with OpenStack and VMware, it’s complicated to integrate Kubernetes with the single-tenant, bare-metal servers that many enterprises still use.

The major public cloud providers have been stepping in with service mesh offerings aimed at more easily controlling and monitoring containerized applications at scale, but many customers are confused about how they work and what differentiates one from the other.

The Road Ahead

So, as you can see, Kubernetes is a powerful technology that rightfully is becoming a de facto standard for container orchestration in the cloud. But as the ease-of-use-and-management concerns show, it’s still a relatively young technology that needs to become easier to use.

I’m confident that the Kubernetes community and the many vendors that have a stake in the technology will work hard to make that happen. In the meantime, though, many organizations have decided they don’t want to operate Kubernetes themselves and are choosing partners to do it for them. Why bulk up on in-house expertise to deal with Kubernetes complexity and waste cycles on infrastructure rather than the core business?

A trusted partner can handle a range of work, such as ensuring seamless migration to the latest Kubernetes releases; integrating open-source logging, monitoring, storage, networking, and container runtimes; and fully monitoring and managing Kubernetes clusters.

Important emerging technologies often experience growing pains, and Kubernetes is no different. Fortunately, cures are available.

Feature image via Pixabay.



Source: InApps.net
