Growing Adoption of Observability Powers Business Transformation

Enterprises have been monitoring application performance for almost as long as we’ve had applications. But recent development trends — including the move to hybrid infrastructure, containers, microservices and, most recently, serverless functions, IoT and edge computing — make it difficult for legacy tools and approaches to keep up.

Observability bridges this gap, adding advanced analytical capabilities on top of the standard metrics, logs, and traces for applications running in a wide variety of modern environments, where finding patterns among the millions or billions of transactions that traverse complex architectures is part of daily life.

According to a recent survey of 405 organizations, conducted by ClearPath Technologies and published by observability vendor Honeycomb, 61% of respondents are now practicing observability, an increase of 8 percentage points over last year.

Observability Adoption Spreads

The respondents to this survey were a self-selected group, people who care about observability. But other industry experts are also seeing an increase in interest.

For example, according to the IT analyst Gartner, 30% of enterprises implementing distributed system architectures will have adopted observability techniques by 2024, up from less than 10% in 2020.

And according to a recent VMware survey of IT practitioners, 16% are already using observability tools and 35% are planning to implement observability in the next six to twelve months.

Instead of adopting new cross-platform observability tools, most companies still rely on traditional logging tools; 52% use specialized container-monitoring tools, and 42% use application performance monitoring tools.

But today’s environments are orders of magnitude more complex than in the past, and software-development cycles are dramatically shorter. As a result, 96% of survey respondents reported problems with their current approach.

According to James Governor, co-founder of the industry analyst firm RedMonk, traditional approaches fail in three main areas.

First, and most obviously, traditional tools have trouble keeping up with cloud and distributed environments.

Second, they have trouble keeping up with the pace of modern software-development cycles. Companies want to push code changes out faster than ever before, and be increasingly agile in their choices of platforms and deployment strategies.

Finally, companies are looking to add context, analytics, and other value-added services to their monitoring platforms, he said.

Pathways to Observability

Enterprises have four main options when it comes to moving toward observability:

  • Wait for their existing, legacy tools and vendors to offer new tooling.
  • Build their own, usually based on open source components.
  • Use the tools offered by their cloud providers.
  • Go with one of the new breed of cloud native observability vendors.

Waiting for legacy vendors can be problematic, because their business models are often predicated on high data-storage costs, Governor said.

“The new entrants are built on Amazon, storing these things in S3 buckets,” he said. “It’s going to be extremely cheap. They can offer a new value proposition. That means you can do more in terms of event storage, management, and analysis.”


This is harder for traditional vendors to do because of economics.

“If you are built at a particular price point, and you build a business on that, and you build a sales force on that, it can be difficult to change,” he said.

Plus, new entrants are able to take advantage of infrastructures that were not available to companies launched 20 years ago, or 10 years ago — or even four years ago, he added. “The entire cloud infrastructure market is about an easier way of doing things.”

Some traditional vendors are building new tools, or buying up startups that have them.

For some enterprises, it makes sense to wait for their existing vendors to add observability to their offerings.

“Maybe you rely on that vendor for a significant part of how you manage your systems and you’re not in a position to jettison that,” Governor said. “But if you’re modernizing your operations and how you build your software, you will need tools that fit your new way of working.”

Otherwise, he said, a legacy vendor that’s not moving quickly enough might become an obstacle to change.

Rather than waiting, many enterprises are opting to roll their own tools, he said. However, Governor added, “It’s just not necessarily a good idea. Managing your own infrastructure sucks.”

Another option, which is often the easiest for enterprises that are just starting to move to cloud infrastructure, is to use the tools offered by their cloud providers.

However, these tools often lag behind, Governor said. And cloud platform providers typically only offer tools for their own platforms — and many companies are moving to multicloud.

“Most organizations are using third-party tools,” he said.

More Mature Companies Pull Ahead

As companies move to modern observability platforms, they start to see benefits in terms of performance and availability, and the benefits accelerate as companies move along the maturity curve.

“Organizations that get better at building and managing software keep getting better at it, and pull away further from people who don’t,” Governor said. There’s a compounding effect, he added: “As you get better at it, you move faster and the number of bugs goes down. If you’re an elite performer, people will want to work for you, and you can pay people more money, and you can do things that your competitors cannot.”

New research bears this out. According to the Honeycomb survey, the more advanced a company is in its use of observability technology, the more benefits it sees.

Teams that have adopted observability are more confident in their ability to catch bugs both before and after code reaches production. In addition, 70% said they can understand what is happening inside their systems at any time, 69% said they can immediately identify the problem when something breaks and understand its impact on other systems, and 51% said they can immediately identify the solution to a problem.

“As they get comfortable with the new tools, as they start to get used to the workflow, they start finding new ways to apply the new muscles they’re building,” said Christine Yen, CEO at Honeycomb.

And modern observability platforms can provide more functionality than ever before. With the ability to collect data on a more granular level, across more systems than ever before, observability can now be the basis of advanced analytics, machine learning and artificial intelligence projects.

“We’re no longer in the world where the data can be managed on a human scale,” Yen said. “Ten or 20 years ago, logs were still meant for humans to read and explore. Systems were smaller. There was less scale. We’re no longer in that world. All our data is machine-generated and we need machines to help us interpret it.”


The reach and power of modern observability tools makes them attractive to people beyond the core operational teams, she added.

Observability Goes Beyond Operations

At first, observability tools are typically of most interest to operations engineers, who care about uptime and service performance. Their need to resolve production issues quickly is what prompts them to find alternative solutions to traditional monitoring.

“But the pendulum is swinging more to the folks who are crafting the code in the first place, to the developers and the entire engineering team,” Yen said. “And that’s where we start seeing the virtuous cycle.”

By having full observability, the feedback loop is shortened and software developers can code faster and more effectively, she said. It starts with resolving incidents, but then teams realize they can now answer all sorts of questions that they couldn’t even start to ask with their old traditional tools.

Finally, modern observability platforms can collect data that’s relevant to business users. For example, in addition to collecting process IDs, they can also collect transaction IDs or customer account numbers. That can help companies identify the business impact of problems in their applications.

“Connecting business concerns and technical ones is not a new thing — companies have forever been trying to align the two,” Yen said. “What’s making this possible in the observability world is making it easier to capture metadata that’s relevant to the business, like customer IDs or shopping-cart IDs.”
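To make that concrete, here is a minimal sketch (not taken from the article) of attaching business metadata to a trace span with the OpenTelemetry Python API. The attribute names, values, and the console exporter are illustrative assumptions; a real deployment would export spans to an observability backend instead.

```python
# A minimal sketch of attaching business metadata (customer and cart IDs)
# to a trace span with the OpenTelemetry Python API. Attribute names and
# values are hypothetical; a real service would export to its observability
# backend rather than to the console.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)

def process_checkout(customer_id: str, cart_id: str, total_cents: int) -> None:
    # Wrap the business operation in a span and record the identifiers the
    # business cares about alongside the usual technical trace data.
    with tracer.start_as_current_span("checkout") as span:
        span.set_attribute("app.customer_id", customer_id)
        span.set_attribute("app.cart_id", cart_id)
        span.set_attribute("app.order_total_cents", total_cents)
        # ... the actual checkout logic would run here ...

process_checkout("cust-42", "cart-1001", 5999)
```

With attributes like these on every span, an observability backend can slice traces by customer or cart rather than only by service or endpoint.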

Several of the new cloud-based observability vendors are able to overlay application monitoring with business process information, said John Carey, managing director in the technology practice at AArete, a global technology consulting firm.

“The new services can look beyond your internal infrastructure and provide intelligence as to what traffic it was linked to,” he said. “So you’re able to see how many sales are going through simultaneously, how many customers are doing account lookups, and from what devices. Observability has moved from just a technical-operations view to a business-operations view.”

Now observability has become a business-intelligence tool, Carey said: “That’s where a lot of the new focus has moved to.”

Observability Gets Intelligent

Legacy monitoring tools often just pass along metrics, traces, or logs.

Modern observability platforms can save this data, and more, into a central store, where it becomes accessible to analytics platforms, machine learning, and AI.
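As an illustration of the kind of data such a store ingests, a service might emit structured, machine-readable events rather than free-form log lines; this is a minimal sketch, and the field names are hypothetical rather than any particular vendor’s schema.

```python
# A minimal sketch of emitting structured, machine-readable events instead of
# free-form log lines, so a central store can later query and analyze them.
# The field names are illustrative, not any particular vendor's schema.
import json
import sys
import time
import uuid

def emit_event(**fields) -> None:
    # One wide event per unit of work, carrying whatever technical and
    # business context is available at emit time.
    event = {"timestamp": time.time(), "event_id": str(uuid.uuid4()), **fields}
    sys.stdout.write(json.dumps(event) + "\n")

emit_event(service="checkout", duration_ms=87, status_code=200,
           customer_id="cust-42", region="us-east-1")
```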

“Observability is not a tool but a concept,” said Bruno Kurtic, founder and vice president of strategy at Sumo Logic, a machine-data analytics company. “It enables the observer to detect, diagnose, and explain behaviors in systems or applications.”

Realizing the potential of that concept requires technologies that can accept any data type, any signal, and perform sophisticated analytics even on unstructured data, he said.

Those technologies “need to be able to explain behaviors even if they have never observed those behaviors in the past,” he said. “To perform extraction, correlation, machine learning — whatever is necessary to understand what is happening to resolve the unknown unknowns.”

Observability Gets Integrated

No matter how a company moves to observability, whether through tools from traditional vendors, from cloud providers, from new startups, or by building its own, today’s distributed, hybrid computing environments require that systems be able to talk to one another.


Today, the open source community is coming together on projects and standards to help make that happen.

“We have plenty of projects around observability and many vendors and customers who are helping grow some of the solutions,” said Bartek Plotka, principal software engineer at Red Hat, the hybrid cloud software company.

Plotka is also the tech lead of the Cloud Native Computing Foundation’s observability technical advisory group. He acknowledged that wider adoption of observability still faces some challenges.

“Cloud providers are not super happy or motivated to provide you with very quick export functionality,” Plotka said.

But he’s seeing increased industry adoption of common protocols, including OpenMetrics and OpenTelemetry.

OpenMetrics, which evolved out of Prometheus, has been around for five years, he said, and is the furthest along.
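For illustration, a service can expose metrics in the Prometheus exposition format that OpenMetrics standardizes; this minimal sketch assumes the prometheus_client Python library, and the metric names, labels, and port are arbitrary choices.

```python
# A minimal sketch of exposing metrics in the Prometheus text format, which
# OpenMetrics standardizes, using the prometheus_client library. The metric
# names, labels, and port are arbitrary choices for illustration.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["endpoint"])
LATENCY = Histogram("app_request_latency_seconds",
                    "Request latency in seconds", ["endpoint"])

def handle_request(endpoint: str) -> None:
    # Record one request and how long it took.
    with LATENCY.labels(endpoint=endpoint).time():
        time.sleep(random.uniform(0.01, 0.05))  # stand-in for real work
    REQUESTS.labels(endpoint=endpoint).inc()

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        handle_request("/checkout")
```

Prometheus, or any other OpenMetrics-compatible scraper, can then pull these values from the service’s /metrics endpoint.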

OpenTelemetry, an observability framework for cloud native software, released version 1.0 of its specification earlier this year. The framework is gaining traction across the observability landscape, with support from vendors including Honeycomb, Lightstep, Splunk, and AWS.

“Logging is simpler as a signal to replicate from place to place, but there is no official protocol that everyone is using, unfortunately,” Plotka said. “But we are working at it.”

One benefit of projects like OpenTelemetry is that they let companies trace distributed applications whose pieces run in different places: not just in the cloud, but also in traditional, on-premises environments.
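As a minimal sketch of what that enables, assuming the OpenTelemetry Python SDK: the calling service injects its trace context into outgoing headers, and the receiving service extracts it, so both spans land in the same trace even when the two services run in very different environments. The service and span names are made up, and the console exporter stands in for a real backend.

```python
# A minimal sketch of propagating trace context between two services so their
# spans join a single distributed trace, as OpenTelemetry enables. The service
# and span names are made up, and a plain dict stands in for HTTP headers.
from opentelemetry import trace
from opentelemetry.propagate import extract, inject
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("demo")

def frontend_call() -> dict:
    # The calling service starts a span and injects its trace context into
    # the outgoing request headers (here, a plain dict).
    headers: dict = {}
    with tracer.start_as_current_span("frontend.request"):
        inject(headers)
    return headers

def backend_handle(headers: dict) -> None:
    # The receiving service extracts the context so its span lands in the
    # same trace, even if it runs in a different environment.
    ctx = extract(headers)
    with tracer.start_as_current_span("backend.work", context=ctx):
        pass  # real request handling would go here

backend_handle(frontend_call())
```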

For example, Vanguard, a global investment management company, wound up with three different types of environments while moving from its data centers to the cloud: traditional data centers, private clouds, and public clouds.

Observability solutions helped the company figure out what was happening across all three environments.

“We have hundreds of teams using OpenTelemetry and Honeycomb,” said Rich Anakor, chief solutions architect at Vanguard.

For example, one team wanted to move data to a new repository in the cloud. “They wanted to know all the dependencies that were involved,” Anakor said. “They wanted to know all the user actions and how they mapped back to all these backend procedures. They had been going at it for months with spreadsheets, with the code, with really smart people trying to solve this problem. But they couldn’t.”

One problem was that they were working with a legacy application in an on-prem data center.

“With OpenTelemetry and Honeycomb, they were able to answer these questions within minutes,” he said. “It really showed our teams that this is beyond just responding to an incident. You can actually understand how your systems are behaving.”

And companies aren’t looking to observability vendors and protocols like OpenTelemetry only to get a handle on distributed systems in hybrid environments, but also to manage new types of cloud environments.

“Distributed tracing is getting more popular now, over large groups of microservices — as with projects like OpenTelemetry,” said Michael Fisher, group product manager at observability vendor OpsRamp.

Traditional application-management vendors can struggle to follow the full path a request takes through an application, he said.

That particularly hinders companies looking to meet service level agreements with their customers, or service level objectives for internal users, he said.

Waiting Is Not an Option

Every industry is seeing new entrants coming along that aren’t shackled by legacy systems and infrastructure, said RedMonk’s Governor.

That means that companies of all kinds need to get better at developing and managing software, he said.

“Look at German automotive companies,” he said. “They’re getting better at software and that’s because of one company — Tesla.”

He added, “Waiting around is not a good business strategy.”

Featured image by CHUTTERSNAP on Unsplash.


