It’s Not Just Performance – InApps Technology

Too often, the debate about running Kubernetes on bare metal versus virtual machines is overly simplistic. There’s more to it than a trade-off between the relative ease of management you get with VMs and the performance advantage of bare metal. (The latter, in fact, isn’t huge nowadays, as I’ll explain below.)

I’m going to attempt to walk through the considerations at play. As you will see, while I tend to believe that Kubernetes on bare metal is the way to go for most use cases, there’s no simple answer.

Kubernetes on Bare Metal Is Faster, But Only a Little

Off the bat, let’s address the performance vs. ease-of-use question.

Andy Holtzmann

Andy is a site reliability engineer at Equinix and has been running Kubernetes on bare metal since v1.9.3 (2018). He has run production environments with up to 55 bare-metal clusters, orchestrated Kubernetes installs on Ubuntu, CentOS and Flatcar Linux, and recently helped accelerate the bring-up of Equinix Metal’s Kubernetes platform to under one hour per new greenfield facility. Andy joined Equinix after working in senior software engineer roles at Twilio and SendGrid.

Yes, VMs are easier to provision and manage, at least in some ways. You don’t need to be concerned with details of the underlying server hardware when you can set up nodes as VMs and orchestrate them using the VM vendor’s orchestration tooling. You also get to leverage things like golden images to simplify VM provisioning.

On the other hand, if you take a hypervisor out of the picture, you don’t spend hardware resources running virtualization software or guest operating systems. All of your physical CPU and memory can be allocated to business workloads.

But it’s important not to overstate this performance advantage. Modern hypervisors are pretty efficient. VMware reports hypervisor overhead rates of just 2 percent compared to bare metal, for example. You have to add the overhead cost of running guest operating systems on top of that number, but still, the raw performance difference between VMs and bare metal can be negligible, at least when you’re not trying to squeeze every last bit of compute power from your infrastructure. (There are cases where that 2% difference is meaningful.)


When it’s all said and done, virtualization is going to reduce total resource availability for your pods by about 10% to 20%.
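
To make that concrete, here is a rough back-of-the-envelope sketch. The core counts and per-VM reservations below are illustrative assumptions, not benchmarks, but they show how a hypervisor and guest operating systems chip away at what a single server can offer to pods.

```python
# Back-of-the-envelope math: how much of a physical server is left for pods
# once a hypervisor and guest operating systems take their cut.
# All numbers are illustrative assumptions, not measurements.

PHYSICAL_CORES = 64            # cores on the bare-metal host (assumed)
HYPERVISOR_OVERHEAD = 0.02     # ~2% hypervisor tax (vendor-reported figure)
GUEST_OS_CORES_PER_VM = 0.75   # cores reserved by each guest OS (assumed)
VMS_PER_HOST = 8               # node VMs carved out of the host (assumed)

bare_metal_cores_for_pods = PHYSICAL_CORES  # one OS; nearly everything goes to pods
virtualized_cores_for_pods = (
    PHYSICAL_CORES * (1 - HYPERVISOR_OVERHEAD)
    - GUEST_OS_CORES_PER_VM * VMS_PER_HOST
)

loss = 1 - virtualized_cores_for_pods / bare_metal_cores_for_pods
print(f"Cores left for pods: bare metal={bare_metal_cores_for_pods}, "
      f"virtualized={virtualized_cores_for_pods:.1f} ({loss:.0%} less)")
```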

Competing Orchestration Layers

Now, let’s get into all the other considerations for running Kubernetes on bare metal versus Kubernetes on VMs. First, the orchestration element. When you run your nodes as VMs, you need to orchestrate those VMs in addition to orchestrating your containers. As a result, a VM-based Kubernetes cluster has two independent orchestration layers to manage.

Obviously, each layer is orchestrating a different thing, so in theory this shouldn’t cause problems. In practice, it often does. For example, imagine a node fails and both the VM-level orchestrator and Kubernetes try to recover from the failure at the same time. The two can end up working at cross purposes: the VM orchestrator is trying to bring the crashed VM back up while Kubernetes is busy rescheduling its pods onto other nodes.

Similarly, if Kubernetes reports that a node has failed but that node is a VM, you have to figure out whether the VM actually failed or the VM orchestrator simply removed it for some reason. This adds operational complexity, as you have more variables to work through.
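
When that ambiguity comes up, the first step is usually just asking Kubernetes what it thinks. Here is a minimal sketch using the Kubernetes Python client to list nodes whose Ready condition isn’t True; from there you still have to work out whether the underlying VM actually crashed or was simply removed. (The client library and kubeconfig setup are assumptions; a kubectl equivalent works just as well.)

```python
# Minimal sketch: list nodes whose Ready condition is not True, as a first
# step in deciding whether the machine is actually down or its VM was
# removed by another orchestration layer. Assumes the `kubernetes` Python
# client and a working kubeconfig.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    ready = next((c for c in node.status.conditions if c.type == "Ready"), None)
    if ready is None or ready.status != "True":
        print(f"{node.metadata.name}: Ready={getattr(ready, 'status', 'unknown')}, "
              f"reason={getattr(ready, 'reason', 'n/a')}")
```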

You don’t have these issues with Kubernetes on bare metal server nodes. Your nodes are either fully up or they’re not, and there are no orchestrators competing for the nodes’ attention.

What’s Running Underneath?

Another key advantage of running Kubernetes on bare metal is that you always know exactly what you’re getting in a node. You have full visibility into the physical state of the hardware. For example, you can use diagnostics tools like SMART to assess the health of hard disks.
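
As a quick illustration, here is a small sketch that shells out to smartctl (from the smartmontools package) to read each drive’s overall health assessment. The device paths are assumptions; adjust them to the drives actually present on the host.

```python
# Sketch of a disk health check on a bare-metal node using smartctl.
# Device paths are assumptions; adjust for the drives on the host.
import subprocess

def smart_health(device: str) -> str:
    """Return the overall SMART health assessment for a block device."""
    result = subprocess.run(
        ["smartctl", "-H", device],
        capture_output=True, text=True, check=False,
    )
    for line in result.stdout.splitlines():
        if "overall-health" in line or "Health Status" in line:
            return f"{device}: {line.strip()}"
    return f"{device}: no health summary found"

for dev in ("/dev/sda", "/dev/nvme0n1"):
    print(smart_health(dev))
```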

VMs don’t give you much insight into the physical infrastructure upon which your Kubernetes clusters depend. You have no idea how old the disk drives are, how much physical memory the servers have or how many CPU cores they carry. You only see the virtual resources allocated to the VMs. That makes issues harder to troubleshoot, which again adds operational complexity.


Using What You Have

For related reasons, bare metal takes the cake when it comes to capacity planning and rightsizing.

There are a fair number of nuances to consider on this front. Bare metal and virtualized infrastructure support capacity planning differently, and there are various tools and strategies for rightsizing everything.

But at the end of the day, it’s easier to get things exactly right when planning bare-metal capacity. The reason is simple enough: With bare metal, you can manage resource allocation at the pod level using cgroups in a hyper-efficient, hyper-reliable way. Using tools like the Kubernetes Vertical Pod Autoscaler, you can divvy up resources down to the millicore based on the total available resources of each physical server.
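
For instance, here is a minimal sketch of declaring pod resources at millicore granularity with the Kubernetes Python client. The image, names and namespace are placeholders; the point is the level of precision you can express.

```python
# Sketch: declaring pod resources down to the millicore with the
# Kubernetes Python client. Image, names and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()

container = client.V1Container(
    name="worker",
    image="example.com/worker:latest",   # placeholder image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "256Mi"},   # 250 millicores
        limits={"cpu": "750m", "memory": "512Mi"},
    ),
)
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="worker-0", labels={"app": "worker"}),
    spec=client.V1PodSpec(containers=[container]),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```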

That’s a luxury you don’t get with VMs. Instead, you get a much cruder level of capacity planning because the resources that can be allocated to pods are contingent on the resource allocations you make to the VMs. You can still use cgroups, of course, but you’ll be doing it within a VM that doesn’t know what resources exist on the underlying server. It only knows what it has been allocated.
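
You can see that limited view directly from inside a workload. The sketch below reads the cgroup v2 limits a process has been handed, assuming the unified hierarchy is mounted at /sys/fs/cgroup; on a VM-backed node, those numbers reflect the VM’s allocation, never the physical server underneath.

```python
# Sketch: what a workload sees from inside its own cgroup (assumes cgroup v2
# with the unified hierarchy mounted at /sys/fs/cgroup). On a VM-backed node
# these values describe the allocation, not the physical machine.
from pathlib import Path

def read(path: str) -> str:
    p = Path(path)
    return p.read_text().strip() if p.exists() else "unavailable"

print("cpu.max    :", read("/sys/fs/cgroup/cpu.max"))      # e.g. "50000 100000" = 0.5 CPU
print("memory.max :", read("/sys/fs/cgroup/memory.max"))   # bytes, or "max" if unlimited
```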

You end up having to oversize your VMs to account for unpredictable changes in workload demand. As a result, your pods don’t use resources as efficiently, and a fair amount of the resources on your physical server will likely end up sitting idle much of the time.

Don’t Forget the Network

Another factor that should influence your decision to run Kubernetes on bare metal versus VMs is network performance. It’s a complex topic, but essentially, bare metal means less abstraction of the network, which leads to better network performance.

To dig a level deeper, consider that with virtual nodes you have two separate kernel networking stacks per node: one for the VM and another for the physical host. There are various techniques for moving traffic between the two stacks (packet encapsulation, NAT and so on), and some are more efficient than others (hint: NAT is not efficient at all). But at the end of the day, each of them exacts some performance hit. They also add a great deal of complexity to network management and observability.
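
To put one rough number on it, consider VXLAN, a common overlay encapsulation: every packet picks up roughly 50 bytes of extra headers, which shrinks the usable MTU for pod traffic. A toy calculation (the base MTU is an assumption):

```python
# Rough sketch of one concrete encapsulation cost: VXLAN wraps each packet
# in extra headers, reducing the effective MTU for the traffic inside.
# Header sizes are the standard IPv4 figures; the base MTU is an assumption.
BASE_MTU = 1500                      # typical Ethernet MTU on the physical NIC
VXLAN_OVERHEAD = 8 + 8 + 20 + 14     # VXLAN + UDP + outer IPv4 + outer Ethernet

effective_mtu = BASE_MTU - VXLAN_OVERHEAD
print(f"Effective MTU for encapsulated traffic: {effective_mtu} bytes")  # 1450
```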

Running on bare metal, where you have just one networking stack to worry about, you don’t waste resources moving packets between physical and virtual machines, and there are fewer variables to sort through when managing or optimizing the network.


Granted, managing the various networks that exist within Kubernetes (something that depends in part on the container network interface, or CNI, plugin you use) does add some overhead. But it’s minor compared to the overhead that comes with full-on virtualization.

Managing Complexity

As I’ve already implied, the decision between Kubernetes on bare metal and Kubernetes on VMs affects the engineers who manage your clusters.

Put simply, bare metal makes operations — and hence your engineers’ lives — simpler in most ways. Beyond the fact that there are fewer layers and moving parts to worry about, a bare-metal environment reduces the constraints under which your team works. They don’t have to remember that VMs only support X, Y and Z configurations or puzzle over whether a particular version of libvirt supports a feature they need.

Instead, they simply deploy the operating system and packages and get to work. It’s easier to set up a cluster, and it’s much easier to manage operations for it over the long term when you’re dealing solely with bare metal.

When Kubernetes on VMs Makes Sense

Let me make clear that I do believe there are situations where running Kubernetes on VMs makes sense.

One scenario is when you’re setting up small-scale staging environments, where performance optimization is not super important. Getting the most from every millicore is not usually a priority for this type of use case.

Another situation is when you work in an organization that is already heavily wedded to virtualized infrastructure or particular virtualization vendors. In this case, running nodes as VMs simply poses less of a bureaucratic headache. Or maybe there are logistical challenges in acquiring and setting up bare-metal servers: if you can self-service some VMs in a few minutes, versus waiting months for physical servers, just use the VMs if that suits your timeline better. Your organization may also be wedded to a managed Kubernetes platform from a cloud provider that only runs containers on VMs, though those constraints are loosening: Anthos, Google Cloud’s managed hybrid and multicloud Kubernetes offering, supports bare-metal deployments, and so does Red Hat’s OpenShift, while AWS’s bare-metal support for EKS Anywhere is coming later this year.

In general, you should never let a dependency on VMs stop you from using Kubernetes. It’s better to take advantage of cloud native technology than to be stuck in the past because you can’t have the optimal infrastructure.

VMs clearly have a place in many Kubernetes clusters, and that will probably never change. But when it comes to questions like performance optimization, streamlining capacity management or reducing operational complexity, Kubernetes on bare metal comes out ahead.

Feature image via Pixabay



Source: InApps.net
