Supercharging Event-Driven Integrations Using Apache Kafka and TriggerMesh


Event-driven integrations give businesses the flexibility they need to adapt and adjust to rapid market and customer preference changes. Apache Kafka has emerged as the leading system for brokering messages across the enterprise. Adding TriggerMesh to Kafka provides a way for messages to be routed and transformed cloud natively across systems. DevOps teams, like the one at PNC Bank, use the TriggerMesh declarative API to define these event-driven integrations and manage them as part of their CI/CD pipeline.

Event-Driven Architecture Basics

Sebastien Goasguen

Sebastien is co-founder and chief product officer of TriggerMesh.

Many modern applications are rapidly adopting an event-driven architecture (EDA). An EDA is used to loosely couple independent microservices and provide near real-time behavior. When coupled with a cloud native mindset and the use of containers and serverless functions, EDA modernizes the entire enterprise application landscape.

Over the last 10 years, starting with the DevOps movement, great emphasis has been placed on gaining agility and reducing the time to market for new applications. Racing from development to production is seen as a true competitive advantage. With this in mind, breaking monolithic applications into microservices has been a way to deploy independent services faster, giving each microservice its own lifecycle. Packaging each microservice and managing it in production gave rise to the container and Kubernetes era. However, connecting those microservices is still an open problem, and that is where EDA comes in. When you adopt an event-driven architecture, you couple your independent microservices loosely through a messaging substrate like Kafka, and so gain the agility and scale you have been looking for.

Decoupling Your Application

Decoupling the components of an application into microservices enables them to be deployed independently of each other: each now has a separate lifecycle and can be developed, packaged, tested and deployed through its own CI/CD pipeline. The advantage is that developers can revise their own service without changing any logic in the other microservices that make up the cloud native application. Essentially, loosely coupled microservices are the libraries of cloud applications, with the benefit that they never have to be recompiled into a monolithic application.

EDA and Containers

Event-driven architectures consist of three main components: producers, consumers and brokers. A producer sends a message to the broker when an event occurs (e.g., an update to a database). A consumer receives an event from the broker and takes some action (e.g., running a serverless function that performs an ETL operation on the database).
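To make the three roles concrete, here is a minimal in-memory sketch in Python. The `Broker` class is a stand-in for a real broker such as Kafka, and all names are illustrative; the point is only that producers and consumers never know about each other:

```python
from collections import defaultdict


class Broker:
    """Stand-in for a message broker such as Kafka: routes events by topic."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, consumer):
        self.subscribers[topic].append(consumer)

    def publish(self, topic, event):
        # The producer never sees who (if anyone) consumes the event.
        for consumer in self.subscribers[topic]:
            consumer(event)


broker = Broker()
audit_log = []

# Consumer: reacts to database updates, e.g., by running an ETL step.
broker.subscribe("db.updated", lambda event: audit_log.append(event))

# Producer: emits an event when a row changes; it is unaware of consumers.
broker.publish("db.updated", {"table": "orders", "id": 42})

print(audit_log)  # → [{'table': 'orders', 'id': 42}]
```

Adding a second consumer is just another `subscribe` call; the producer's code never changes, which is exactly the loose coupling described above.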


The difference between a message and an event can be confusing at first. An event is a notification that state has changed; a message carries additional data beyond the notification itself. An event is like a ringing phone: it tells you someone is calling, but not who is on the line or what they want. A message provides the details of the call: who called and a transcription of what was discussed.
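The distinction can be sketched with two illustrative Python types (these are not part of any standard, just a way to make the phone-call analogy concrete):

```python
from dataclasses import dataclass


@dataclass
class Event:
    """A bare notification: something of this type happened, nothing more."""
    type: str  # e.g., "order.updated"


@dataclass
class Message(Event):
    """A notification plus the data describing what actually happened."""
    source: str    # who "called"
    payload: dict  # the details of the change

# The "ringing phone": we know an order changed, nothing else.
ring = Event(type="order.updated")

# The full "call": who called and what was said.
call = Message(type="order.updated",
               source="orders-service",
               payload={"order_id": 42, "status": "shipped"})
```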

Producers aren’t affected by how the events they produce are consumed, so additional consumers can be added without touching the producers; consumers, in turn, need not concern themselves with how events were produced. Because of this loose coupling, microservices can be implemented in different languages, or use different technologies suited to specific jobs. This makes containers the perfect packaging mechanism for microservices, and in our EDA context, the perfect packaging for producers and consumers of events and messages. Increasingly, cloud native applications managed in Kubernetes will be a set of producers and consumers of events with a Kafka messaging substrate, running in Kubernetes or in a cloud service such as Amazon MSK or Confluent Cloud.

CloudEvent Specification from CNCF

Imagine now that these messages come from many different services, whether cloud services or on-premises applications. That makes it difficult for one system to understand messages from another. Not only do you need to transform the messages, you also need a common understanding of their metadata.

That’s where a standard, or at least a specification, comes into play. CloudEvents 1.0, a specification championed by the Cloud Native Computing Foundation (CNCF), provides a common way for cloud providers to express events. The spec says that an event is expressed in a specific format and that it needs to carry certain fields, such as a timestamp, a subject, a source and a type.
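For example, a CloudEvents 1.0 event serialized as JSON looks roughly like this. Per the spec, `specversion`, `id`, `source` and `type` are required context attributes, while `time` and `subject` are optional; the values below are invented for illustration:

```python
import json

# A CloudEvents 1.0 event in its JSON representation. The "data" member
# carries the event payload; everything else is context metadata that
# any CloudEvents-aware consumer can interpret. Values are invented.
cloud_event = {
    "specversion": "1.0",
    "id": "a8f2c1d0-0001",
    "source": "/mydb/orders",
    "type": "com.example.database.row.updated",
    "time": "2022-01-15T12:00:00Z",
    "subject": "orders/42",
    "data": {"order_id": 42, "status": "shipped"},
}

print(json.dumps(cloud_event, indent=2))
```

Because every producer emits this same envelope, a consumer can route on `type` or `source` without knowing anything else about the system that produced the event.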

An increasing number of systems are implementing event-driven architecture. Cloud providers see this trend, and they all offer a messaging substrate with their own specific features. Amazon Web Services (AWS), for example, provides Kinesis for processing event messages in real time. Other solutions, like the open source distributed event platform Apache Kafka, can integrate with virtually any system and be deployed on any cloud or on-premises. The popularity of Kafka (and of its commercial distribution from Confluent) has grown rapidly since its development at LinkedIn. It is now in use in most of the Fortune 100, as it starts displacing enterprise service buses like Tibco or Mulesoft. The reason for this growth is that Kafka provides a highly scalable, real-time message stream for sharing the events that power event-driven applications across the cloud and the enterprise.

Supercharging Kafka

Kafka is an exceptionally good system for brokering messages and supporting EDA, but that is only the first part of the equation. At TriggerMesh, we have found that providing a way for those messages to be routed and transformed into more meaningful events, which can be exchanged cloud natively, is extremely valuable. As Kafka gains popularity, demand for doing more sophisticated things with those real-time event streams is rising. The ability to consume, route and transform event streams into useful messages (not just from Kafka, but from all cloud providers) is the key to long-term success. By dealing with events cloud natively, we mean that the event flow in your application is described with a powerful declarative API. Kubernetes has shown us how to manage applications at scale with a declarative mindset; doing the same for EDA is the way to “supercharge” Kafka.


GitOps Meets EDA

Imagine being able to define your event producers, consumers, transformations, event stores and routing tables with a declarative API. You could apply to your event-driven applications the same DevOps mindset you adopted when decoupling your monolithic application. Your version control system would hold the representation of your event flow, and any change to the declared state of the EDA would be automatically reconciled in your live system.
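A declarative event flow of this kind might be checked into Git as a Kubernetes-style manifest. The sketch below is purely illustrative: the kind and field names are invented, not the actual TriggerMesh API, and only show the shape of source, transformation and sink declared as desired state:

```yaml
# Illustrative Kubernetes-style manifest; the kind and fields are a
# sketch, not the exact TriggerMesh API. The point is that the entire
# event flow lives in version control and is reconciled like any other
# declared resource.
apiVersion: example.dev/v1alpha1
kind: EventFlow
metadata:
  name: orders-flow
spec:
  source:              # producer side: consume a Kafka topic
    kafka:
      topic: orders
  transformation:      # reshape each record into a CloudEvent
    - add:
        path: type
        value: com.example.order.updated
  sink:                # consumer side: deliver to a microservice
    service: order-processor
```

A change to this file in Git (say, pointing the sink at a new service) would be reconciled into the live system by the GitOps pipeline, with no imperative reconfiguration of Kafka itself.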

One example is our customer PNC Bank, which is using Apache Kafka. The bank’s project team collects events and messages from all sorts of sources, like Jenkins and Bitbucket, and pushes every message to Kafka. The team saw a need for a cloud native integration platform like TriggerMesh to add meaning to events while leveraging the event-streaming capabilities of Apache Kafka under the hood, because it had a set of microservices that needed to be triggered on demand when certain events happened. TriggerMesh provided a declarative way to define event flows without going deep into Kafka configuration: no Java coding, no Kafka Connect configuration, no language-specific SDK to produce or consume messages. The team adopted the CloudEvents specification, its microservices simply consumed and produced CloudEvents, and TriggerMesh provided the wiring with a declarative API that let it keep using its GitOps pipeline.

PNC Bank recognized that this API-driven approach to integration fits well with its DevOps groups, because they can manage the integrations the same way they manage their microservices applications. The TriggerMesh declarative API was easy for the DevOps team to integrate into its pipelines.

Conclusion

TriggerMesh abstracts event brokers, event sources and event sinks. For brokers, you can swap in whichever message-streaming technology you want: Kinesis, Kafka, Google Pub/Sub, NATS or others.

TriggerMesh harnesses the events flowing through the underlying broker and extends them so they are ready to use in new scenarios (e.g., real-time streaming from the cloud providers, Apache Kafka or enterprise service buses). The use case we run into most often is using TriggerMesh to extend open source Apache Kafka. With Kafka Connect, there is much more low-level sysadmin work and installation configuration; with TriggerMesh, since it is fully API-driven, the developer simply interacts with an API and declares a desired state.


Feature image via Pixabay.

Source: InApps.net
