From DevOps to DevApps

Mark Hinkle

Mark has a long history in emerging technologies and open source. Before co-founding TriggerMesh, he was the executive director of the Node.js Foundation and an executive at Citrix, Cloud.com and Zenoss where he led their open source efforts.

When the founding fathers of DevOps — Patrick Debois, Gene Kim, Andrew Clay Shafer, Damon Edwards, and John Willis — started to espouse the cultural changes in IT they called DevOps, they argued for changes like measurement, sharing, and automation. These changes were needed to bridge the traditionally siloed groups of developers and operations. This movement increased awareness about abstracting infrastructure and automating deployment. Today, it’s an undisputed best practice to automate infrastructure deployments and provide continuous delivery of IT systems.

Cloud providers are baking many of these types of services into their offerings. They are offering fully managed services, not just hosted containers, Kubernetes, and serverless functions. Platforms like Google’s Anthos provide a way to modernize existing applications and build new ones cloud natively, essentially systematizing DevOps practices. Backend-as-a-Service (BaaS) offerings like Amazon’s Amplify can almost completely take over the burden of managing infrastructure. Serverless computing (AWS Lambda, Azure Functions, OpenShift Serverless) provides managed runtimes and autoscaling (scale-up and scale-down) to deploy code as discrete microservices that can be woven together into cloud native applications.
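To make the serverless idea concrete: the developer supplies only a handler function, and the platform manages the runtime, invocation, and scaling. A minimal AWS Lambda-style handler might look like this sketch (the `order_id` field is a hypothetical example payload, not part of any real event schema):

```python
# Minimal AWS Lambda-style handler sketch: the platform invokes this
# function per event; the developer writes no server code at all.
import json

def handler(event, context=None):
    # Pull a (hypothetical) field out of the incoming event payload
    # and return a response object for the platform to relay.
    order_id = event.get("order_id", "unknown")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"processed order {order_id}"}),
    }
```

The platform wires this handler to an event source (an HTTP request, a queue message, a file upload) and runs as many copies as the traffic requires.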

The evolution of highly scalable, low-latency, automated infrastructure like this is why I believe we are now entering the era of what I am calling (with tongue planted firmly in cheek) DevApps. DevApps gives developers the ability to build applications with the same kind of automation and real-time response that DevOps brought to infrastructure.


Event-Driven Architecture

One of the key design patterns emerging in the cloud native space is Event-Driven Architecture (EDA). This is a foundation for DevApps. An event is simply a record of a change in a system’s state.

The core idea around event-driven architecture is that these events are shared across multiple systems as messages. To accomplish this, you need the following:

  • Events: These are the messages that describe the changes in a system (many events are specific to the system, but ideally can be transformed into a standardized format such as the CNCF CloudEvents spec).
  • Producers: Systems that produce these events in some format.
  • Consumers: Systems that accept these events and take some action.
  • Brokers: A service that manages the routing and delivery of events (think Amazon Kinesis in the AWS ecosystem). In some cases a broker may even be sophisticated enough to transform an event from the producer’s schema into something the consumer can read.
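The standardized format mentioned above, the CNCF CloudEvents spec, defines a small set of required attributes (specversion, id, source, type) plus optional context attributes and a data payload. A minimal event in that shape might look like this sketch (the source and type values are made up for illustration):

```python
# A minimal event envelope following the CNCF CloudEvents v1.0
# attribute set. Required attributes: specversion, id, source, type.
import json
import uuid
from datetime import datetime, timezone

def make_cloud_event(source, event_type, data):
    """Wrap an arbitrary payload in a CloudEvents-style envelope."""
    return {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),           # unique per event
        "source": source,                  # URI-reference naming the producer
        "type": event_type,                # reverse-DNS style event type
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": data,
    }

# Example: a hypothetical ticket-created event.
event = make_cloud_event(
    source="/crm/zendesk",
    event_type="com.example.ticket.created",
    data={"ticket_id": 1234, "subject": "Cannot log in"},
)
print(json.dumps(event, indent=2))
```

Because every producer wraps its payload in the same envelope, consumers and brokers can route and filter on `source` and `type` without understanding the payload itself.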

In EDAs, events spark a workflow. For example, ServiceNow can produce events that trigger processes in AWS. You can flow information from Zendesk to Snowflake for advanced analytics. This is a well-defined process in AWS itself, where Amazon EventBridge acts as a serverless event bus to route data from one event source to AWS targets. At TriggerMesh, we are working to do the very same thing, but we want to tie virtually any consumer to any producer on any cloud with our cross-cloud event bus.
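The broker’s routing role can be sketched in a few lines: consumers subscribe to event types, and the bus delivers each published event to every matching subscriber. This is a toy, in-process stand-in for what services like EventBridge or a cross-cloud event bus do between real systems:

```python
from collections import defaultdict

class EventBus:
    """Toy in-process event bus: routes events to subscribers by type."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        """Register a consumer callback for a given event type."""
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        """Deliver the payload to every handler for this event type."""
        return [handler(payload) for handler in self._subscribers[event_type]]

# Two independent consumers react to the same event; neither knows
# about the other, and the producer knows about neither.
bus = EventBus()
bus.subscribe("ticket.created", lambda e: f"analytics saw ticket {e['id']}")
bus.subscribe("ticket.created", lambda e: f"alerting saw ticket {e['id']}")
results = bus.publish("ticket.created", {"id": 7})
```

Adding a third consumer is one more `subscribe` call, with no change to the producer, which is exactly the decoupling that makes EDAs attractive.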

Batch Versus Event-Driven

Organizations are seeing an exponential increase in the amount of data their systems have to handle, volumes their traditional RDBMSs cannot keep up with. Historically, turning that data into business insight has meant batch-based ETL (extract-transform-load) into a data warehouse. While batch processing makes sense for obtaining historical insights, it is a backward-looking strategy.

Companies seeking to be agile need real-time insights, and that’s where event-driven architectures provide the most benefit. By decoupling business logic from event processing, services can operate without being aware of each other, so an application becomes a combination of services that can live virtually anywhere. Best-of-breed services, like mobile messaging from Twilio, can be used in conjunction with serverless functions running in AWS Lambda, or security logs generated in Azure can be forwarded to Splunk for deeper scrutiny. Because these automated systems interact in real time, the processes are fluid and support up-to-the-minute action.
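The batch-versus-event-driven difference can be seen in miniature: a batch pipeline accumulates records and processes them all on a schedule, while an event-driven pipeline handles each record the moment it arrives. A deliberately simplified sketch (the doubling transform stands in for any real ETL step):

```python
# Batch style: collect everything, then transform on a schedule
# (e.g. a nightly ETL job). Insight arrives hours after the data.
def batch_process(records):
    return [r["value"] * 2 for r in records]

# Event-driven style: transform each record immediately on arrival,
# so downstream systems see results in near real time.
class StreamProcessor:
    def __init__(self):
        self.results = []

    def on_event(self, record):
        self.results.append(record["value"] * 2)

stream = StreamProcessor()
for record in ({"value": 1}, {"value": 2}, {"value": 3}):
    stream.on_event(record)  # result available per event, not per batch

batched = batch_process([{"value": 1}, {"value": 2}, {"value": 3}])
assert stream.results == batched == [2, 4, 6]  # same output, different latency
```

The outputs are identical; what changes is when they become available, which is the whole argument for event-driven processing in time-sensitive workflows.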


Application Flows

The bottom line: event-driven application development is growing because it increases agility and can reduce complexity. It allows data to flow more quickly from application to application (or cloud service) via event messages.

Perhaps an easier way to think about event-driven is to think in terms of application flows. For example, when a trouble ticket is created in Zendesk, that data can be automatically analyzed by Amazon Comprehend to determine the customer’s sentiment (angry, satisfied, or confused). Then purchasing history, warranty information, and other pertinent data stored in a warehouse like Amazon Redshift can give the customer service rep a complete picture of the customer and help resolve any issues more expediently.
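That flow can be sketched with stand-in stubs for the three services; in a real implementation the sentiment call would go to Amazon Comprehend’s API and the history lookup to Redshift, but the structure of the flow is the point here, and every function body below is a made-up placeholder:

```python
# Illustrative application flow: ticket created -> sentiment analysis
# -> warehouse lookup -> enriched view for the support rep.
# All three steps are stubs standing in for Zendesk, Amazon Comprehend,
# and Amazon Redshift; only the shape of the flow is real.

def detect_sentiment(text):
    """Stub for a sentiment API such as Amazon Comprehend."""
    angry_words = {"broken", "terrible", "refund"}
    return "ANGRY" if angry_words & set(text.lower().split()) else "NEUTRAL"

def fetch_customer_history(customer_id):
    """Stub for a data-warehouse lookup (e.g. Amazon Redshift)."""
    return {"customer_id": customer_id, "orders": 12, "warranty": True}

def on_ticket_created(ticket):
    """Event handler composing the enriched view for the support rep."""
    return {
        "ticket_id": ticket["id"],
        "sentiment": detect_sentiment(ticket["body"]),
        "history": fetch_customer_history(ticket["customer_id"]),
    }

view = on_ticket_created(
    {"id": 9, "customer_id": "c-42", "body": "My device is broken"}
)
```

Each step is an independent service reacting to an event, so any one of them can be swapped out (a different sentiment API, a different warehouse) without touching the others.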

One approach to event-driven architecture uses the JAMstack, a term coined by Netlify cofounder and CEO Matt Biilmann. While WordPress powers an overwhelmingly large share of websites, the JAMstack is a collection of tools for delivering web content from the edge of the network, reducing database calls and bringing content closer to the user via a CDN. You can also extend that stack with additional cloud native services, such as Auth0 for authentication. In a web app that collects user data, information could be stored in Airtable; when a record is created in Airtable, it could automatically update your subscriber list and enroll the new subscriber in an automation in Mailchimp.

These powerful application flows replace what were once slow, manual steps: batched imports and exports and other tasks performed by expensive human labor.

Summary

Organizations interested in digital transformation and modernization are moving from monolithic apps to microservices. This lets teams work on independent services without extensive coordination with other developers; smaller teams require less coordination, and communication is easier. Consider the JavaScript ecosystem, where many thousands of developers publish Node.js-compatible packages via npm, or the vast number of plugins that extend Chrome, making it not just a web browser but a platform. All of that plugin development happens independently, yet collectively it provides a wide breadth of capabilities. Microservices provide the same kind of benefits, albeit within one company.


In his seminal book on software engineering and project management, “The Mythical Man-Month: Essays on Software Engineering,” Fred Brooks observed that complex programming projects cannot be perfectly partitioned into discrete tasks, because coordination and communication in large projects introduce enormous overhead. It’s the same reason that Jeff Bezos and Amazon came up with the famous two-pizza team: a team no larger than can be fed by two pizzas is small enough to reduce the overhead of communication and scheduling within complex organizations.

By adopting a microservices architecture based on serverless functions and event-driven design, you can take advantage of your own two-pizza rule, letting large organizations logically divide up work and operate independently and more quickly. While event-driven architecture and DevApps are not a panacea, they do provide considerable benefits for companies that want to reduce complexity, increase agility, and leverage the robust capabilities of cloud native architecture.

Feature image via Pixabay.




Source: InApps.net
