Knative Applies to Join Kubernetes Community at CNCF
Google has submitted Knative to the Cloud Native Computing Foundation (CNCF) for consideration as an incubating project, which begins the process to donate the Knative trademark, IP and code to the industry-led organization.
This move is a huge step forward in tying one of the most valuable add-ons for Kubernetes to the community that supports Kubernetes itself. Moving to the CNCF could spark broader participation as Knative transitions from a Google-led project to an industry-led one. Even as a Google-led project, Knative has drawn contributions from many companies, including Red Hat, IBM, VMware, SAP and TriggerMesh.
Kubernetes + Knative = 3
Mark has a long history in emerging technologies and open source. Before co-founding TriggerMesh, he was the executive director of the Node.js Foundation and an executive at Citrix, Cloud.com and Zenoss, where he led their open source efforts.
Knative is one of the best ways to provide a development environment on Kubernetes that makes cloud native developers more productive. Developers get faster deployments than they could achieve with Kubernetes alone, and Knative provides a framework for building serverless experiences for development, testing and deployment.
Cloud native developers can deploy code faster with Knative, without worrying about scaling or spending time setting up Kubernetes network routes. In addition, organizations can run serverless environments alongside containerized applications, letting enterprises maximize the use of their existing infrastructure and decide if and when to move to the cloud.
Knative brings a lot to Kubernetes in the form of reusable components designed to automate everyday operations. Knative can deploy applications from their source code to containers, route traffic between services using modern ingress gateways, scale resources when demand increases or decreases, and forward event streams to any number of cloud providers. In addition, there are few limits on developers, who can use whichever programming language or framework they’re most comfortable using.
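To make the "reusable components" concrete, here is a minimal sketch of a Knative Service manifest; the service name, image and environment variable are illustrative, not from the original article. From this single object, Knative Serving creates the revision, route and autoscaled deployment that would otherwise require several hand-written Kubernetes resources.

```yaml
# Hypothetical minimal Knative Service; image name is illustrative.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/example/hello:latest
          env:
            - name: TARGET
              value: "World"
```

Applying this with `kubectl apply -f` gives the service a routable URL, and Knative handles revisioning and traffic routing behind it.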
Critics point out that Kubernetes has a steep learning curve, especially for developers. But Knative provides a serverless approach that significantly reduces the need for those developing and deploying code to be Kubernetes experts. Operators get consistent management of containers and the flexibility to deploy any image when necessary.
Noted cloud native advocate Kelsey Hightower, of Google Cloud Platform, says, “If Kubernetes is an electrical grid, then Knative is its light switch.” Knative offers better control of complex systems by providing a way to build loosely coupled microservices deployed as serverless functions rather than as monolithic applications. Furthermore, these microservices can be event-driven by activities in virtually any system in the cloud or private data center.
Benefits of Knative in the Private Data Center
With Knative, the benefits common to public cloud function-as-a-service offerings, like AWS Lambda and Google Cloud Run, also reach users in private data centers. Serverless platforms in the public cloud handle the load and execute functions in response to triggers. The only hard limit on the number of function instances is the cloud's capacity.
Serverless deployments are theoretically limitless, but so are their costs. If too many function-trigger events occur, costs can increase dramatically. Enterprises that have existing infrastructure investments, want predictable costs, or operate at extreme scale, like Dropbox, can run their cloud native infrastructure using Kubernetes and Knative.
In Kubernetes, services provide an abstraction for application deployment at runtime. Developers deploy their code without having to worry about whether it runs on an Amazon EC2 instance, a Google Compute Engine VM, or bare-metal Kubernetes in a private data center, and services can scale to zero when not in use. Knative simplifies the Kubernetes components needed to manage containerized applications and provides simple ways to scale application instances up and down without managing any underlying infrastructure.
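As a sketch of how scale-to-zero is expressed, the Knative Pod Autoscaler reads annotations on the Service's revision template; the fragment below shows the idea (annotation names have varied slightly across Knative versions, so treat this as illustrative rather than definitive).

```yaml
# Fragment of a Knative Service spec: autoscaling bounds on the
# revision template. minScale "0" allows scale-to-zero (the default);
# maxScale caps how far a traffic spike can scale the service out.
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"
        autoscaling.knative.dev/maxScale: "10"
```

With these bounds, an idle service consumes no pod resources, and a burst of requests can fan out to at most ten instances.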
Knative Eventing is a lesser-known component of Knative used for building an event-driven architecture. Through loosely coupled relationships, applications can take action on events across the enterprise. Knative conforms to the CNCF CloudEvents specification, which permits any language to create, send and interact with events. As a result, you can quickly develop applications to respond to events in other systems.
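To show what conforming to the CloudEvents specification looks like in practice, here is a minimal sketch in Python that builds a structured-mode event with the four context attributes the CloudEvents 1.0 spec requires (`specversion`, `id`, `source`, `type`). The event type and source values are illustrative assumptions, not from the article.

```python
import json
import uuid
from datetime import datetime, timezone

# Context attributes required by the CNCF CloudEvents 1.0 specification.
REQUIRED = ("specversion", "id", "source", "type")

def make_cloudevent(event_type, source, data):
    """Build a structured-mode CloudEvent as a JSON-serializable dict."""
    return {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),          # unique per event
        "source": source,                  # URI-reference identifying the producer
        "type": event_type,                # reverse-DNS style event type
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": data,
    }

def is_valid(event):
    """Check that all required context attributes are present and non-empty."""
    return all(event.get(attr) for attr in REQUIRED)

event = make_cloudevent("com.example.order.created", "/orders/service",
                        {"orderId": 42})
print(json.dumps(event, indent=2))
```

Because the envelope is just JSON with agreed-upon attribute names, producers and consumers written in any language can interoperate, which is the portability point the specification is after.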
Knative is agnostic to the Kubernetes distribution, as long as it is conformant, and to whether it is deployed in the cloud or in an on-premises data center. In a recent article, noted AWS cloud economist Corey Quinn asserts an "unfulfilled promise of serverless," pointing out that part of the problem with serverless, specifically AWS Lambda, is a lack of portability. By making your target Kubernetes rather than a single cloud provider, you can solve this problem.
Knative is an ideal example of a valuable part of the stack in a new era of composable infrastructure. Rather than buying a complete stack, you can choose a combination of open source and enterprise software running across private data centers and cloud providers. You can consume Kubernetes for free, via an enterprise subscription such as Rancher, VMware or Red Hat OpenShift, or via a cloud provider, and you can select a vendor for Knative as well, such as VMware or TriggerMesh. Either way, you can run your infrastructure in the cloud or on-premises and consume open source software alongside commercial offerings.
Knative has always been open, but its move to the CNCF, with its experience running open source projects in a company-neutral way, will make Knative a bona fide industry-led option for serverless projects.
Open source is a ubiquitous way of delivering today's cloud native and infrastructure software, from Linux and Kubernetes to solutions at virtually every level of the software stack. However, you shouldn't pick an open source option just because it's free; licensing costs, or the lack thereof, are a red herring. The real opportunity is the freedom to use the software the way you choose and to modify it if you wish.
Many infrastructure enterprise software vendors offer commercial and open source products, giving enterprises the option to select the solution that best meets their needs. This is especially true in the cloud, as Confluent (Apache Kafka), Elastic, Kong, MongoDB and others besides Knative are all offered as open source or enterprise subscriptions.
Open source software offers many advantages, from enabling developers to review the code to letting them add significant features without relying on a third-party company to rework the software to meet their needs. It also enables massive collaboration in a transparent, open forum that often moves faster than traditional commercial off-the-shelf development; and things typically move faster in the cloud.
Knative’s Future: Enterprise-Owned Event-Driven Architecture
Virtualization has become a foundation for sharing infrastructure, from storage virtualization to hardware virtualization of servers using Xen, KVM and VMware. Infrastructure as a service (IaaS) is common and enabled by virtualization. For a brief time, platform as a service (PaaS) — like Cloud Foundry and AWS Elastic Beanstalk — was all the rage. Even the popular container technology Docker was spawned from a PaaS company, DotCloud, which later renamed itself Docker Inc. However, containers seem to be the right level of abstraction for the application layer and will endure for quite some time.
Containers, whether persistent or ephemeral serverless deployments, provide the building blocks for the decoupled services that make up cloud native applications. In addition, containers can be part of an event-driven architecture, where events carry real-time data flows that create integrations between systems.
At TriggerMesh, we have been users of Knative and part of its development community. As a result, we have seen the powerful capabilities of Knative Serving for providing FaaS and Knative Eventing for building event-driven applications. We use Knative Eventing in much the same way that Amazon EventBridge is used as a serverless event bus to trigger AWS Lambda functions and other AWS targets. We also advocated for open source and released the TriggerMesh Cloud Native Integration Platform at KubeCon in October, providing what we hope becomes the de facto event-driven integration platform for Kubernetes.
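As a sketch of the event-bus pattern described above, a Knative Eventing Trigger subscribes a service to events flowing through a Broker, filtered by event attributes; the names here (trigger, event type, target service) are illustrative assumptions.

```yaml
# Hypothetical Trigger: route events of one CloudEvents type from the
# default Broker to a Knative Service that handles them.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: order-created
spec:
  broker: default
  filter:
    attributes:
      type: com.example.order.created
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-handler
```

This mirrors an EventBridge rule: the filter plays the role of the event pattern, and the subscriber plays the role of the Lambda target.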
As you evaluate your options for infrastructure, Knative and Kubernetes are a powerful combination for helping to build event-driven infrastructure that’s open source, flexible and provides a solid foundation for your cloud native initiatives. As a result, operators benefit from a reliable, easy-to-maintain cloud native platform, and developers have a rapid deployment environment for their serverless applications.
InApps Technology is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker, Hightower.
Featured image via Pixabay.