How to Adapt and Thrive – InApps


David Hayes

David Hayes is a Product Leader for PagerDuty, a company he has been at for the past five years. Previously, David had also worked as a Program Manager for CrossCap and a Program Manager for Microsoft.

“Change is the only constant.” This time-worn phrase has special significance for developers. We all know our role as developers is constantly morphing, and in today’s environment of cloud computing, distributed computing and increased customer expectations, developers wear many hats.

The problem is these hats aren’t necessarily the ones we want to wear, or signed up to wear. Developers now spend less time than ever writing code (around 22 percent of their time) and instead find themselves focused (and evaluated) on enhancing and maintaining scalability, improving customer experience, boosting service efficiencies and lowering costs. The conundrum I’ve just described is also called DevOps.

Measuring the Success of DevOps

There’s little doubt anymore that DevOps has evolved from the bright, shiny object everyone talks about into a necessary reality. Of course, from the outset it was always the case that a developer would spend a fair amount of time on the Ops side; it’s your responsibility to do both. At the same time, doing anything you don’t know how to do and don’t want to do is frustrating, and it’s difficult to have at least half of our jobs consumed by work we didn’t necessarily go to school to learn.


Just as the developer’s role is changing, so too have our ways of measuring not only our progress but also the effectiveness and performance of our code, tools and processes. This increased visibility is, overall, a good thing, and all of the data we now constantly collect has been a great boon to DevOps. That said, there are plenty of wrong ways to measure, among them counting lines of code or the number of deploys.


Some of you likely remember the days when our “scoring system” was whether or not your feature was deemed worthy enough to be featured on the CD boxes that packaged the software. Now, though, we can actually know with great granularity what features get used and how quickly they’re adopted (or not). But it’s not just the usage metrics that matter here; it’s also metrics of performance, uptime, and segmentation of users, among others.

Here’s an example: Let’s say I have a simple feature called “bulk user import” and I discover usage is low, but that’s okay because I was expecting it to be low. What I can do now is track the behavior of the users I’ve imported versus the users who signed up themselves or came in through some other channel. I might well conclude that, in isolation, my feature is succeeding, but that we aren’t giving the rest of our users the support (or whatever else it may be) they need to use the product.
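
As an illustrative sketch of that kind of segmentation (all data, field names and thresholds here are hypothetical, not PagerDuty’s), you could bucket users by how they arrived and compare activation rates per segment:

```python
from collections import defaultdict

# Hypothetical records: (user_id, acquisition_source, used_the_feature)
users = [
    ("u1", "bulk_import", True),
    ("u2", "bulk_import", True),
    ("u3", "self_signup", False),
    ("u4", "self_signup", True),
    ("u5", "integration", False),
]

def activation_by_segment(records):
    """Return the fraction of active users in each acquisition segment."""
    totals, active = defaultdict(int), defaultdict(int)
    for _, source, used in records:
        totals[source] += 1
        if used:
            active[source] += 1
    return {s: active[s] / totals[s] for s in totals}

print(activation_by_segment(users))
# → {'bulk_import': 1.0, 'self_signup': 0.5, 'integration': 0.0}
```

A spread like this is exactly the signal described above: the imported segment is fine in isolation, while other segments may need more support.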

Here’s what this example tells us about a new measurement process available to developers: We’re nearing a point where a product team can claim ownership of a particular metric, demonstrate that it nailed it, and be rewarded with more money, resources or recognition within the organization.

In my view, the three most significant ways to accurately evaluate the performance of product teams are:

  • Measuring usage data.
  • Obtaining real data on abandonment.
  • Segmenting users by who uses the product and who doesn’t.
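
The abandonment metric in particular is easy to compute once you record how far each user got. A minimal sketch, assuming a hypothetical onboarding funnel and per-user progress data (none of this is a real PagerDuty schema):

```python
# Hypothetical onboarding funnel, in order.
funnel_steps = ["signup", "created_service", "invited_team", "first_incident"]

# Hypothetical map of user -> last funnel step completed.
progress = {
    "u1": "first_incident",
    "u2": "created_service",
    "u3": "signup",
    "u4": "invited_team",
}

def abandonment_rates(steps, progress):
    """For each step, the fraction of users whose journey stopped there."""
    last_index = {user: steps.index(step) for user, step in progress.items()}
    n = len(progress)
    return {step: sum(1 for i in last_index.values() if i == k) / n
            for k, step in enumerate(steps)}

rates = abandonment_rates(funnel_steps, progress)
```

A step where the rate spikes is where users are abandoning you, which tells the owning team exactly where to look.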

In the case of my company, this last one is significant. Users of the PagerDuty API are, on average, far more successful. Why? Because if you’re dedicated enough to a piece of software to write your own custom code on top of it, you’re already invested in the product, and that investment is itself a mark of success. This same mindset about dogfooding can be found in many other successful product teams.


As for developers’ shifting roles, in many organizations these changes come from the rise of distributed systems. There’s a host of reasons for their rising popularity: scalability, the ability to tamp down costs, and improvements in performance and flexibility. One less frequently cited advantage of distributed systems, though, is that they let you establish internal SLAs among teams, yet another metric for understanding success.

To see how this helps, say you have a SaaS product and the target is a consistent page load time of two seconds or less. If you can’t tell which elements are consuming those two seconds, it obviously isn’t an easy thing to fix. But with internal SLAs among your different teams, it’s possible to examine the pipeline and find that the front-end rendering code takes 2 milliseconds, the database requires 1 second and the routing needs 0.5 seconds. You know precisely where the problems are, what needs to be fixed and by which team.
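
That latency breakdown can be expressed as a simple budget check. The stage names and internal SLA values below are assumptions for illustration, using the numbers from the example above:

```python
# Hypothetical per-stage latencies, in seconds, matching the example:
# rendering 2 ms, database 1 s, routing 0.5 s.
stage_latency = {"frontend_render": 0.002, "database": 1.0, "routing": 0.5}

# Hypothetical internal SLA each team has agreed to, in seconds.
internal_sla = {"frontend_render": 0.1, "database": 0.8, "routing": 0.6}

page_budget = 2.0  # the customer-facing two-second target

def check_slas(latency, slas, budget):
    """Flag stages over their internal SLA and compare the total to budget."""
    violations = [stage for stage, t in latency.items() if t > slas[stage]]
    total = sum(latency.values())
    return violations, total, total <= budget

violations, total, within_budget = check_slas(stage_latency, internal_sla, page_budget)
# Here the 2-second page budget holds (total ≈ 1.502 s), yet the database
# stage exceeds its internal SLA, so you know which team owns the fix.
```

This is the point of internal SLAs: the overall target can be green while one stage’s agreement is already broken, which surfaces the problem (and the owning team) before customers feel it.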

Distributed Systems, Tools, and the Developer

Of course, few things in life come “upside only.” There are disadvantages to distributed systems. First, they’re inherently more complicated due to heterogeneity, asynchronous communication challenges and partial failures. Their complexity also contributes to the multiple-hat-wearing problem I discussed earlier. An additional downside is that migrating off a legacy system to a distributed system requires a lot of work that doesn’t introduce new functionality; it’s akin to a tax you’re simply required to pay.

As a result, what’s happening is that people are writing their new stuff on microservices, for example, and slowly chipping away at the legacy systems. For us at PagerDuty, this is a great thing because we have the ability to monitor multiple disparate ecosystems, whether legacy or distributed.

So what does this all mean for the developer’s role and how to thrive and adapt as it changes? First of all, developer contributions are becoming more visible because of measurement and more advanced tools, meaning developers are part of an iterative process sooner. Whether they have a “good” or “bad” idea, it’s now possible to get it out there, iterate on it, see whether it works and reject it if necessary.


Second, whoever has the better tooling has the advantage. If you’re able to accurately measure adoption and additionally can segment your adoption or tell how quickly something is loading, you’re demanding more from your tools, and, therefore, your tooling becomes a competitive advantage. For instance, with PagerDuty, you can have multiple teams using specialized tools on top of us, and if you need another application performance management (APM) tool, you can easily plug it in while leaving your other systems untouched.


When it comes to adapting and thriving in this new environment, it’s imperative to release early and measure. In fact, if you’re not releasing features to a small segment of your customer base, you’re already behind. When you release early and measure your features, you’re also measuring the upstream and downstream impact. Our systems are producing and capturing more data than ever before, so measuring as much as we can and extracting insight is critical.
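
One common way to release to a small segment is deterministic hash-based bucketing. This is a generic sketch, not a PagerDuty-specific mechanism; the feature name and percentages are made up:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Deterministically decide whether a user is in a feature rollout.

    Hashing (feature, user) yields a stable bucket in [0, 100), so a
    given user always gets the same answer for a given feature, and
    raising `percent` only ever adds users to the segment.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10000 / 100.0
    return bucket < percent

# Release to 5 percent of users first, measure, then widen.
enabled = [u for u in ("u1", "u2", "u3") if in_rollout(u, "bulk_import_v2", 5.0)]
```

Because the decision is a pure function of user and feature, you can measure the small segment against everyone else without storing any rollout state.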

Metrics and measurement also lead to another way to adapt and thrive: kill dying projects earlier. With a waterfall project, you can spend a year working on something, then throw a launch party and celebrate. But in today’s world, you might spend four months on a project, release it, then spend a couple of months tweaking it, and still see little adoption. Guess what? Let it go and move on. We have all this data now; use it to inform the decision. The 80/20 rule applies here: rely 80 percent on the metrics and 20 percent on gut. Your gut is, of course, still useful for judging the relationships among the metrics themselves: use intuition to develop the narratives, then track the metrics against those narratives.

In my years as a developer, I’ve lived through the evolution from Windows NT in the corporate environment to Linux, from server networks to full-fledged data centers, to today’s world of distributed systems and software eating the world. It’s no doubt an exciting time to be a developer, but at the same time it can be taxing and overwhelming. Today’s developer has more power, and to end on another time-worn phrase, “with great power comes great responsibility.”

PagerDuty is a sponsor of InApps.

Feature image by Jason Black, via Unsplash.



Source: InApps.net
