The Cloud Native Ecosystem’s Impact on Linux Kernel Development

For the longest time, Linux kernel development was focused on improving the desktop and server experience. From expanded hardware support to security to performance, everything seemed to be highly focused and moving along like a (somewhat) well-oiled machine.

But then something shifted in the IT landscape. Seemingly out of nowhere, containers and the cloud became crucial to the enterprise, which meant kernel developers had to take them into account going forward. On the surface, the change seemed as though it would be a seamless integration of new tools and a new way of thinking about the deployment of services.

Of course, things aren’t always as they seem… at least on the surface. What should have been a slight change has instead become something with far-reaching implications.

Consider Docker Hub

Case in point: Docker recently disclosed that an unknown party had gained unauthorized access to a Docker Hub database. From this unwarranted entry, the attacker gained access to sensitive data from nearly 190,000 accounts. That, in and of itself, is bad news. Users’ information was at risk: passwords, email addresses, names, information we’d much prefer to keep under wraps.

But consider this: that information also could have given the attacker access to images uploaded by users. With a few quick commands, those images could easily have been rebuilt with malware, Trojans, and other apps and/or services with malicious intent. Should the right image be rebuilt and re-uploaded, thousands (if not millions) of unsuspecting Docker users could have pulled it down and deployed containers built to steal.


Now we have an issue… one the kernel community has to consider. This is especially true for those kernel developers who focus on security.

John Morello, Chief Technology Officer for container security company Twistlock, had this to say on the Docker Hub issue:

Docker Hub could be the conduit to attack some down-chain target that they knew was consuming images from Docker Hub, because what you’re talking about doing is basically poisoning the upstream, at least a very far part of the upstream, of the software supply chain. So if you, for example, knew that organization X was using particular repositories on Docker Hub and you wanted to penetrate organization X, one mechanism to do that could be to compromise the repositories they use on Docker Hub, put your implant in there, and then when organization X runs those images from Hub, they would be running your implant as well.

“Poisoning the upstream” should send chills down the spine of anyone who works with containers. Because of this, it lands squarely on the shoulders of kernel developers to solve a puzzle that, on the surface, could be unsolvable. How does the Linux kernel prevent such poisoning to the upstream? Or can it?

A blog post from Joe Fernandes, Red Hat senior director of product management for OpenShift, sums up the containerized process (and how it relates to the Linux kernel) perfectly:

First, each containerized process is isolated from other processes running on the same Linux host, using kernel namespaces. Kernel namespaces provide a virtualized world for the container processes to run in. For example, the “PID” namespace causes a containerized process to only see other processes inside of that container, but not processes from other containers on the shared host. Additional security isolation is provided by kernel features like dropped capabilities, read-only mounts and seccomp. Additional file system security isolation is provided by SELinux in distributions like Red Hat Enterprise Linux. This isolation helps ensure that one container cannot exploit other containers or take down the underlying host.
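To make that description concrete, here is a minimal sketch of the PID namespace isolation Fernandes describes. It assumes a Linux host, Go installed, and root (or CAP_SYS_ADMIN) privileges; it is an illustration of the kernel primitive, not how any particular container runtime is built:

```go
// pidns_demo.go: a minimal sketch of PID namespace isolation.
// Assumptions: Linux host, run as root (or with CAP_SYS_ADMIN).
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Clone a shell into new PID and mount namespaces. Inside the new PID
	// namespace the shell is PID 1 and cannot address processes outside it.
	cmd := exec.Command("sh", "-c", `echo "pid inside namespace: $$"`)
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err) // typically "operation not permitted" when run unprivileged
	}
}
```

Run as root, the shell reports itself as PID 1. Every container runtime builds on this same primitive, combined with mount, network, UTS, IPC, and user namespaces.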

That type of exploitation could make or break a container deployment. Imagine you’ve just deployed a Docker Swarm or Kubernetes cluster to serve up your company’s signature app or service, only to find the image used was able to exploit the host kernel and bring down your company network or, worse, steal sensitive data.


Scale and Consistency

The Linux kernel, as an isolated piece of technology, can run on nearly any type of device. From desktops to servers, smartphones to IoT and other embedded devices — the kernel is everywhere.

“As modern development increasingly means cloud native development, kernel contributors are adding features that enhance Linux for these use cases,” Morello said. “Much as virtualization and cloud drove the innovation in technologies like KVM, so too have containers driven development in kernel technologies like seccomp and namespaces.”
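Seccomp, which Morello singles out, is a good example of that container-driven kernel work: it lets a process restrict which system calls it (or code it runs) may make. The sketch below is an illustration only, assuming libseccomp and the github.com/seccomp/libseccomp-golang bindings are available; it is not the profile Docker actually ships. It denies directory creation for the current thread and shows the kernel rejecting the call:

```go
// seccomp_demo.go: illustrative sketch of a seccomp filter.
// Assumptions: Linux host, libseccomp installed,
// github.com/seccomp/libseccomp-golang available.
package main

import (
	"fmt"
	"os"
	"runtime"
	"syscall"

	seccomp "github.com/seccomp/libseccomp-golang"
)

func main() {
	// Seccomp filters apply per thread, so pin this goroutine to one OS thread.
	runtime.LockOSThread()

	// Allow everything by default, then deny directory creation with EPERM.
	filter, err := seccomp.NewFilter(seccomp.ActAllow)
	if err != nil {
		panic(err)
	}
	for _, name := range []string{"mkdir", "mkdirat"} {
		call, err := seccomp.GetSyscallFromName(name)
		if err != nil {
			panic(err)
		}
		if err := filter.AddRule(call, seccomp.ActErrno.SetReturnCode(int16(syscall.EPERM))); err != nil {
			panic(err)
		}
	}
	if err := filter.Load(); err != nil {
		panic(err)
	}

	// With the filter loaded, the kernel rejects the call before it runs.
	err = os.Mkdir("/tmp/seccomp-demo", 0o755)
	fmt.Println("mkdir after loading the filter:", err) // expect: operation not permitted
}
```

Container runtimes use the same mechanism, typically inverted: a default-deny filter with an explicit allowlist of syscalls.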

But with container and cloud technology, scaling beyond that isolation becomes critical. Morello:

“You don’t really see people these days, at least not typically, saying ‘I’m going to build an application in C++ and run it directly as a package on the OS.’ That’s not the way that most people are building what you would consider to be a modern application these days, right? You’re building something that’s containerized, you’re going to run that in a container, you’re going to deploy it via something like Kubernetes, and the kernel provides that base layer that abstracts some or all of that underlying hardware, and all the complexity that goes with it across different CPU types and so forth, which allows you to run those applications in a really consistent way regardless of what that form factor is and regardless of the scale of it.”

This (relatively) new technology requires the Linux kernel be capable of delivering a consistent experience across a multitude of different hardware types, configurations, and container deployments of massive scale. That’s not an easy feat.

And speaking of scaling…

The cloud has cemented its place within the technology ecosphere. Hundreds of millions of users depend upon the cloud every single day. Our data lives there. Our photos, our videos, our documents… all the things we do and use to make our daily lives more productive and/or enjoyable.


What runs the cloud? Linux.

We now see so many companies working with Infrastructure as a Service (IaaS), Software as a Service (SaaS), and Platform as a Service (PaaS). This is yet another arena where Linux shines. However, because these types of services place high demand on servers and networks, the Linux kernel (as well as installed applications) needs to be optimized to perform in such environments.

In their paper “The nonkernel: A Kernel Designed for the Cloud,” a group of researchers from the Israel Institute of Technology and the Open University of Israel has this to say:

“In a traditional server, the operating system manages entire resources: all CPUs, all RAM, all available devices. In the cloud, the kernel acquires and releases resources on an increasingly finer granularity, with a goal of acquiring and releasing a few milliseconds of CPU cycles, a single page of RAM, a few Mb/s of network bandwidth.”

It becomes clear that a Linux kernel designed for the cloud must be approached quite differently than one designed for traditional use cases.

The paper continues:

“The first requirement is to allow applications to optimize for cost. On a traditional server, costs are fixed and applications only optimize for “useful work.” Useful work might be measured in run-time performance, e.g., in cache hits per second. In the cloud, where any work carried out requires renting resources and every resource has a momentary price-tag associated with it, applications would still like to optimize for “useful work” — more useful work is always better — but now they would also like to optimize for cost. Why pay the cloud provider more when you could pay less for the same amount of useful work? Thus the cloud kernel should enable applications to bi-objective optimize for both useful work and cost.”
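A toy calculation makes the paper’s bi-objective framing concrete. The throughput and price figures below are purely hypothetical; the point is only that the allocation delivering the most raw useful work is not necessarily the one delivering the most useful work per dollar:

```go
// costopt.go: a toy sketch of bi-objective optimization for useful work and
// cost. All throughput and price numbers are hypothetical.
package main

import "fmt"

// allocation models one candidate rental of cloud resources.
type allocation struct {
	name       string
	workPerSec float64 // useful work, e.g. requests served per second
	costPerSec float64 // momentary price of the rented resources, dollars per second
}

func main() {
	candidates := []allocation{
		{"1 vCPU, 1 GiB RAM", 800, 0.000012},
		{"2 vCPU, 2 GiB RAM", 1500, 0.000024},
		{"4 vCPU, 4 GiB RAM", 2100, 0.000048},
	}

	mostWork, mostWorkPerDollar := candidates[0], candidates[0]
	for _, c := range candidates[1:] {
		if c.workPerSec > mostWork.workPerSec {
			mostWork = c
		}
		if c.workPerSec/c.costPerSec > mostWorkPerDollar.workPerSec/mostWorkPerDollar.costPerSec {
			mostWorkPerDollar = c
		}
	}

	// A traditional server only cares about mostWork; a cloud workload that is
	// billed per resource also cares about mostWorkPerDollar.
	fmt.Println("most useful work:     ", mostWork.name)
	fmt.Println("most work per dollar: ", mostWorkPerDollar.name)
}
```

With these made-up numbers the largest allocation wins on raw work while the smallest wins on work per dollar, which is exactly the trade-off the researchers argue a cloud kernel should expose to applications.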

Red Hat and Twistlock are sponsors of InApps.

Feature image by Heather Truett from Pixabay.

InApps is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.



