Service connectivity software provider Kong has released Kong Enterprise 2.1 with a variety of new features, but at the heart of those features, explained Kong Chief Technology Officer and co-founder Marco Palladino, is the same theme that has driven the company since its origins: connectivity. Kong was born as an open source project when API marketplace Mashape transitioned from monolith to microservices and needed a gateway that could run in a decoupled and distributed way across its new containerized infrastructure. This latest version, at its core, expands upon Kong’s ability to work in numerous environments and communicate between various types of workloads.

“Without taking care of connectivity, your organizations are not going to be able to successfully transition to modern applications, let alone microservices,” said Palladino. “I like to think of Kong as the Switzerland of connectivity. We are neutral and play nicely with all the other cloud vendors in order to be able to provision this connectivity that fundamentally has no boundaries.”

This neutrality is key to Kong Enterprise 2.1, which introduces Hybrid Mode. In this mode, Kong data planes can run across data centers, clouds, and geographies without a central Cassandra deployment, while a central control plane manages both virtual machines (VMs) and containers running in those disparate locations and services.
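Concretely, hybrid mode splits a Kong cluster into two roles: a control plane node that holds the configuration and the Admin API, and database-less data plane nodes that proxy traffic wherever they happen to run. The sketch below shows roughly what the corresponding kong.conf settings look like, based on Kong's documented hybrid-mode directives (role, cluster_cert, cluster_control_plane); the hostnames, certificate paths, and port are placeholders rather than values from this release announcement.

```
# Control plane node: owns the configuration and exposes the Admin API.
role = control_plane
cluster_cert = /etc/kong/cluster.crt
cluster_cert_key = /etc/kong/cluster.key

# Data plane node: runs anywhere (VM, container, another cloud) with no database,
# pulling its configuration from the control plane over a mutually authenticated link.
role = data_plane
database = off
cluster_control_plane = control-plane.example.com:8005
cluster_cert = /etc/kong/cluster.crt
cluster_cert_key = /etc/kong/cluster.key
```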

“When we look at the role of the infrastructure architect, those are the guys that have to provision and support all the application teams that are doing anything within the organization. Some of those things are running on one cloud, some of those things are going to be running on Kubernetes, some of them are going to be running on legacy VMs. So how do we support them all, regardless of the underlying infrastructure? We can more easily support all of these environments by supporting data planes running in each one of these different architectures, clouds, or geographies,” explained Palladino. “Without this feature, what organizations have been doing is using different technologies to manage connectivity in different silos. By providing a hybrid mode, we can reduce that fragmentation and, therefore, improve the reliability of how connectivity is being managed.”

Just as we use Kubernetes to abstract away data center operations, our customers are using Kong Enterprise to abstract away connectivity from all of their environments. — Marco Palladino, Kong

Palladino also pointed to Kong Enterprise’s new ability to serve as a service mesh ingress and egress through an integration with Kong Mesh, which is built on top of the Envoy-based Kuma service mesh that joined the Cloud Native Computing Foundation (CNCF) earlier this year. While Kong originally set out to build this functionality on top of Istio, another Envoy-based service mesh, Palladino said Istio proved too hard to scale and to use, did not support multiple meshes, and treated VMs as second-class citizens. By comparison, Kong Enterprise 2.1 provides out-of-the-box, enterprise-level support for Kuma, which treats VMs as first-class citizens, is Kubernetes-native, and integrates directly with the Kong API Gateway.

One final change Palladino highlighted concerns the ability to extend Kong with plugins. Previously, those plugins had to be written in either Lua or C; Kong has now released an SDK that adds support for the Go programming language.
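As a rough illustration of what that looks like, here is a minimal sketch of a Go plugin written against the Go PDK (github.com/Kong/go-pdk); the configuration field and header names are made up for the example rather than taken from Kong's documentation.

```go
package main

import (
	"fmt"

	"github.com/Kong/go-pdk"
)

// Config mirrors the configuration schema the plugin declares to Kong.
type Config struct {
	Message string `json:"message"`
}

// New returns a fresh Config instance; Kong calls it when instantiating the plugin.
func New() interface{} {
	return &Config{}
}

// Access runs in the access phase of the proxy lifecycle, the same hook Lua
// plugins implement, and uses the PDK to read the request and set a response header.
func (conf *Config) Access(kong *pdk.PDK) {
	host, err := kong.Request.GetHeader("host")
	if err != nil {
		kong.Log.Err(err.Error())
		return
	}
	msg := conf.Message
	if msg == "" {
		msg = "hello"
	}
	// Illustrative header name; applied when Kong builds the response.
	_ = kong.Response.SetHeader("x-hello-from-go", fmt.Sprintf("%s, %s", msg, host))
}
```

In this release timeframe, Go plugins were typically compiled as shared objects (go build -buildmode=plugin) and loaded through Kong's Go plugin server; the exact build and configuration steps vary by Kong version, so the official plugin development documentation is the reference for a real deployment.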

Looking ahead, Palladino said that Kong hopes to expand its functionality to help maintain uptime and to grow its role as a control plane.

“We have a suite of machine learning products that today do anomaly detection as well as auto-documentation. We really want our connectivity platform to be the Waze of service connectivity, because we do have data planes running at the edge as a sidecar, and we have data planes running next to every single service in your organization,” said Palladino. “We can also self-heal and maintain the uptime of the overall architecture and infrastructure, without any human intervention. What we are going to be working on more and more is making sure that humans are not going to be a dependency for keeping the uptime of the overall enterprise.”

The Cloud Native Computing Foundation is a sponsor of InApps.

Feature image by skeeze from Pixabay.