How Log Analysis Can Bring Front-End Engineers on Call – InApps Technology 2022
At the time series-focused Influx Days in San Francisco, presenters offered many unique views of log data. From talks on better analyzing log streams to bitter warnings against identifying what’s “normal,” the one-day event featured a range of ways enterprises can apply new techniques. The goal: get their arms around the near-infinite supply of logging and monitoring data their systems generate.

Emily Nakashima, a front-end engineer at Honeycomb.io, gave a talk specifically targeted at bringing JavaScript front-end developers into the problem mitigation workflow. To do that, she said, you’ll need to extend your logging analysis all the way to the front-end JavaScript.

Her talk, titled "What Your JavaScript Does When You’re Not Around," offered many tips and hints for teams looking to bring front-end issue catching into the daily purview of JavaScript developers. With proper error catching and analysis in place, front-end engineers can be on call alongside their back-end and operations co-workers, said Nakashima.

“There’s something nice about the whole team being on alert, and it does wonders for people to understand the effects of being on pager rotation. The monitoring team doesn’t know what JavaScript does. That team is usually very far away from JavaScript. It is worth bringing these teams together to talk about what JavaScript errors look like. The best tool is the one your team will use. It’s more important for data to be where developers will see it than for it to be the perfect tools,” said Nakashima.


Troubleshooting Common Errors

While bringing the front-end team into the on-call rotation sounds exciting, it requires some deeper analysis of the actual errors the team is receiving from JavaScript. These, however, aren’t always so straightforward.

For example, Nakashima pointed out that it is not uncommon for third parties to copy and paste your entire website’s front-end code in order to steal some small portion of its functionality. This can wreak havoc if tools like New Relic are embedded in the page, sending spurious data about an unknown server into your logging flow.
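One common mitigation is to check, at send time, that the page is actually running on one of your own domains before reporting anything. The sketch below is illustrative rather than taken from the talk; the allowed hostnames and the /telemetry endpoint are hypothetical.

```typescript
// Only report events when the page is served from one of our own domains,
// so copies of the site hosted elsewhere don't pollute the logging pipeline.
const ALLOWED_HOSTS = new Set(["example.com", "www.example.com"]); // assumption: your real domains

function sendToMonitoring(payload: unknown): void {
  // Placeholder transport: POST to a hypothetical first-party collection endpoint.
  void fetch("/telemetry", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}

window.addEventListener("error", (event: ErrorEvent) => {
  if (!ALLOWED_HOSTS.has(window.location.hostname)) {
    return; // page is running on an unknown host; drop the event
  }
  sendToMonitoring({
    message: event.message,
    source: event.filename,
    host: window.location.hostname,
  });
});
```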

Other pitfalls come from errors thrown by the client side, for reasons which are out of the engineers’ control. An example, said Nakashima, is the user having a broken browser plug-in installed. Generally, she said, a lot of monitoring products can filter out this type of noise automatically, but for home-rolled monitoring systems this can be a major pain; filtering out browser plug-in issues requires knowledge of hundreds of plug-ins.
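For teams that do roll their own, a rough first-pass filter is to drop errors whose source file or stack trace points at a browser-extension URL rather than at code you actually shipped. This is a general pattern rather than something prescribed in the talk, and the scheme list below is illustrative, not exhaustive.

```typescript
// Drop errors that originate from a browser extension rather than our own code.
const EXTENSION_SCHEMES = [
  "chrome-extension://",
  "moz-extension://",
  "safari-extension://",
  "safari-web-extension://",
];

function isExtensionError(event: ErrorEvent): boolean {
  const stack = event.error instanceof Error ? event.error.stack ?? "" : "";
  const haystack = `${event.filename}\n${stack}`;
  return EXTENSION_SCHEMES.some((scheme) => haystack.includes(scheme));
}

window.addEventListener("error", (event) => {
  if (isExtensionError(event)) {
    return; // noise from a user's plug-in, not our code
  }
  console.log("reportable error:", event.message); // hand off to your tracker here
});
```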

Another area of danger is the use of ad blockers. Nakashima said most ad-blocking software will filter out requests to third-party sites and limit all information on a page to the host domain. This can block analytics and monitoring software, and thus leave ad-blocker users semi-invisible to the error-tracking systems. The solution, said Nakashima, is to proxy all of those third-party systems through your host domain.
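One way to picture that proxying approach is a small server-side route on your own domain that forwards analytics payloads to the vendor. The sketch below assumes Node 18+ (for the built-in fetch) and Express; the vendor collector URL is made up, and real monitoring vendors typically document their own first-party or proxy setups.

```typescript
// Minimal first-party proxy sketch: the browser sends beacons to /metrics on
// our own domain, and the server forwards them to the third-party collector,
// so ad blockers that filter cross-origin analytics hosts never see the vendor.
import express from "express";

const app = express();
app.use(express.raw({ type: "*/*" })); // pass the request body through untouched

app.post("/metrics", async (req, res) => {
  const upstream = await fetch("https://collector.example-vendor.com/ingest", {
    method: "POST",
    headers: { "Content-Type": req.get("Content-Type") ?? "application/octet-stream" },
    body: req.body,
  });
  res.status(upstream.status).send(await upstream.text());
});

app.listen(3000);
```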

Then you’ll have to filter out the errors thrown by your third-party vendors when they push code changes. “This seems like a rare, one-time problem, but once you look at it you realize you see this kind of third-party code problem all the time,” said Nakashima.

Having this extended visibility into the front-end should also provide your team with deeper statistics than it may be capturing now. You should be tracking browser versions, installed fonts, color schemes, visibility, geolocation, and support for new browser APIs. Tracking this information will give you a better view of the technologies your customers are using, said Nakashima.
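Most of that context is reachable through standard browser APIs. The sketch below shows one way such a context object might look; the field names are arbitrary, and installed fonts are omitted because the browser does not let you enumerate them directly without probing specific font names.

```typescript
// Client context worth attaching to every front-end error or analytics event.
interface ClientContext {
  userAgent: string;
  prefersDarkColorScheme: boolean;
  pageVisibility: DocumentVisibilityState;
  geolocationSupported: boolean;
  intersectionObserverSupported: boolean;
  resizeObserverSupported: boolean;
}

function collectClientContext(): ClientContext {
  return {
    userAgent: navigator.userAgent,
    prefersDarkColorScheme: window.matchMedia("(prefers-color-scheme: dark)").matches,
    pageVisibility: document.visibilityState,
    geolocationSupported: "geolocation" in navigator,
    intersectionObserverSupported: "IntersectionObserver" in window,
    resizeObserverSupported: "ResizeObserver" in window,
  };
}

// Attach this object to whatever payload your error or analytics events carry.
console.log(collectClientContext());
```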


Beware Automated Anomaly Detection

Elsewhere at Influx Days, Baron Schwartz, CEO of VividCortex, told a cautionary tale about anomaly detection. He said that many vendors have cropped up over the past four years to offer automated anomaly detection, sometimes under different names. He added that, in his view, fully automated anomaly detection is impossible and will always be so.

Schwartz founded VividCortex in 2012 to help companies better understand the queries they were running on their databases. The ultimate goal is to show which queries are jamming the system, enabling better use of the database overall.

Schwartz has spent a lot of time experimenting with anomaly detection since founding VividCortex, and he said that all such solutions fall down over the long haul. This is because it is incredibly difficult to determine what exactly a “normal” state looks like in a complex system.

“A monitoring tool isn’t supposed to give answers, it’s supposed to be an extension of your team. You should choose your monitoring tool the way you’d hire an engineer,” said Schwartz.

“Anomaly detection gets called a lot of different names: machine learning, big data, dynamic baselining, automatic thresholds. A lot of these things are simply anomaly detection, and anomaly detection is predicting and forecasting. Really, when enterprises are talking about anomaly detection, they want to find something that’s not normal. Their assumption is that systems with not-normal things going on are interesting to look at,” said Schwartz.

Unfortunately, while this sounds like a good idea, it’s not so true in practice, said Schwartz. Oftentimes, most of the activity going on in a system at any given time is abnormal and unpredictable. With so many moving pieces in most systems, it’s tough to distill normality into a single algorithm.

“Ultimately it gets boiled down to some equation somewhere that ends up being a proxy for what’s assumed to be normal, and you use that model to predict. You train the model on past data and then, in the case of monitoring, you look at data as it comes in and ask, ‘is this data point anomalous?’” said Schwartz.
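A toy example makes the pattern concrete: reduce “normal” to a couple of summary statistics learned from past data, then flag anything that strays too far from them. This is not VividCortex’s method, just the simplest possible illustration of the approach Schwartz is describing, with an arbitrary three-sigma threshold.

```typescript
// Learn "normal" as the mean and standard deviation of recent points, then
// flag any new point that falls outside sigmaThreshold standard deviations.
function isAnomalous(history: number[], candidate: number, sigmaThreshold = 3): boolean {
  const mean = history.reduce((sum, x) => sum + x, 0) / history.length;
  const variance = history.reduce((sum, x) => sum + (x - mean) ** 2, 0) / history.length;
  const stdDev = Math.sqrt(variance);
  if (stdDev === 0) {
    return candidate !== mean; // flat history: anything different looks "abnormal"
  }
  return Math.abs(candidate - mean) / stdDev > sigmaThreshold;
}

// Example: steady latency history, then a spike.
const latenciesMs = [102, 98, 105, 101, 99, 103, 100];
console.log(isAnomalous(latenciesMs, 104)); // false – within the learned "normal"
console.log(isAnomalous(latenciesMs, 450)); // true  – flagged as anomalous
```

As Schwartz argues, the hard part is not the equation itself but the assumption baked into it: that past behavior is a reliable proxy for what the system should look like now.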


Not only is it incredibly hard to figure out what normal means in a system, it is also hazardous to guess wrong. If the pagers in your administrators’ pockets go off every time something abnormal happens on your network, they will likely be going off every few minutes. That kind of pager spam leads most administrators to ignore alerts entirely, which creates a major problem when something bad really does happen.

Schwartz even tried to come at the anomaly problem from another direction: he attempted to measure the time between changes in host services. He hoped to find anomalies by detecting when the time between changes changed. As a result, however, Schwartz said he built the most useless spam generation machine he’d ever seen.

“The truth is, there’s all these wacky things happening in our systems all the time. They’re not actionable, they’re not diagnosable, and there’s nothing for you to do about it. On the other hand, if you build these models, even if you work hard you get lots of indications something abnormal happened, and the cost-benefit is exactly the reverse of what we as engineers are wired to think,” said Schwartz. Having these systems in place can create more work, essentially.

“Alerts that come in that are non-actionable immediately turn alerting systems into a Gmail filter to the trash bin. They create pager burnout. These results come out of a black box that’s not interpretable. The data is already highly digested. It is surprising how quickly you end up six or eight degrees away from the original input,” said Schwartz.

When the chips are down and a system is broken, added Schwartz, the last thing you want to do is try and figure out what some black box means when it tells you there’s a problem. This is why many administrators simply end up opening an SSH connection to the problem machine anyway: they need to see the root of the problem, not some blinking light that vaguely indicates there’s an issue.

InfluxData is a sponsor of InApps Technology.

InApps Technology is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Honeycomb.io.
