
KubeCon + CloudNativeCon and InfluxData sponsored this podcast.


Machine Learning AI Finds its Place in the Production Pipeline

Also available on Apple Podcasts, Google Podcasts, Overcast, PlayerFM, Pocket Casts, Spotify, Stitcher, TuneIn

Machine learning-aided artificial intelligence (AI) might one day emulate the intelligence of hundreds or even thousands of human brains at once, making human input obsolete throughout the software development cycle. In theory, a single system could not only replace a hundred-member DevOps team but also assume the roles and tasks of hundreds of similar-sized teams. It is easy to imagine that, like taxi and truck drivers, software developers' days are numbered. Except they really are not.

When it comes to thinking outside the box, writing elegant and creative code, or responding when chaos strikes, AI is largely lost. That only partially explains why machine-taught computers may never create art or write poetry the way a human can, but it does suggest that the mass replacement of the men and women in software development and operations will not happen anytime soon.

But what machine learning is already good at, said Nick Durkin, field chief technology officer of Harness, a provider of machine learning-based continuous delivery services, during this episode of the InApps Makers podcast, is taking over many of the data-crunching and mundane tasks in the production and deployment pipelines.


“This is a journey that people are taking and what’s interesting is I don’t think everyone knows what’s at the end of the rainbow. As we move closer and closer to continuous deployment and to continuous delivery (CD), people are obviously continuing to enable that,” Durkin said. “Developers are able to actually deploy their code in the production if it succeeds all their tests and it succeeds all their requirements regulatory and so forth. But what happens when it gets into production is the interesting part for ML and this is where traditionally, there are also lots of humans.”

ML is already finding its way into CD, at least in what Harness offers, in a number of ways: building pipelines, continuous verification, real-time analytics and even rollbacks when a deployment goes awry. Unlike automation created with if-then rules, the ML relies on neural networks trained on pipeline data to perform these tasks.
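To make the distinction concrete, the continuous-verification idea can be reduced to a minimal sketch: compare a canary's post-deploy metrics against a learned baseline and trigger a rollback when they drift too far. This is an illustrative, hypothetical example (a simple statistical threshold rather than a neural network, and not Harness's actual algorithm), but it shows the shape of the decision an ML-driven pipeline automates.

```python
# Minimal continuous-verification sketch: decide whether a canary
# deployment should be rolled back by comparing its error rates
# against a baseline learned from the previous version.
# (Hypothetical logic for illustration, not a vendor's implementation.)
from statistics import mean, stdev

def should_rollback(baseline, canary, sigmas=3.0):
    """Flag the canary if its mean error rate exceeds the baseline
    mean by more than `sigmas` standard deviations."""
    threshold = mean(baseline) + sigmas * stdev(baseline)
    return mean(canary) > threshold

baseline = [0.010, 0.012, 0.011, 0.009, 0.013]   # error rates, previous version
healthy  = [0.011, 0.010, 0.012]                 # canary within normal variance
broken   = [0.080, 0.095, 0.110]                 # canary clearly misbehaving

print(should_rollback(baseline, healthy))  # False: deployment proceeds
print(should_rollback(baseline, broken))   # True: automated rollback fires
```

A production system would learn the baseline continuously and weigh many metrics at once, but the point stands: the verdict comes from learned behavior of the service, not from a hand-written if-then rule.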

“Let’s say an SRE, site reliability or ops teams are having to monitor applications as they go out. And for a lot of folks, that’s a big war room when the deployment happens,” Durkin said.  “And if you want to increase that deployment from whatever it is and… you don’t have enough DevOps guys to go and do that” the ML can take over.

Once implemented, the ML adds automation to the pre-production pipelines for QA, staging and multiple other tasks, Durkin said. "Some of these become automated quite quickly where people have that goal of continuous delivery or deployment," Durkin said.

In addition to automating QA and staging tasks in the pre-production environment, ML lets DevOps teams "start looking at your logging as it's coming out," Durkin said. "You can look at your metrics but also in your production deployments."

In many ways, the essential role of ML is to process data at a scale DevOps teams do not have the manpower to match during the delivery and deployment cycles. "Humans can't handle the amount of data that's coming at us if we increase deployments," Durkin said. "So, where I see ML starting to be used is now in the production pipelines where [organizations] can actually start using that machine learning and neural nets during production, to take over for those people that we can't actually even find."
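The "more data than humans can read" problem shows up most clearly in deployment logs. A toy version of the triage such tooling performs is to normalize log lines into templates and surface only events never seen before the deploy. This is a hypothetical sketch, not any specific vendor's implementation; the `template` helper and the sample log lines are invented for illustration.

```python
# Illustrative log triage: surface only log events whose "template"
# (the message with volatile tokens masked) never appeared before the
# deployment, so humans review a handful of lines instead of millions.
# (Hypothetical example, not a specific product's algorithm.)
import re

def template(line):
    """Normalize a log line by masking volatile tokens such as numbers."""
    return re.sub(r"\d+", "<NUM>", line)

def novel_events(baseline_logs, deploy_logs):
    """Return deploy-time lines whose template is absent from the baseline."""
    known = {template(line) for line in baseline_logs}
    return [line for line in deploy_logs if template(line) not in known]

baseline = [
    "request 4521 served in 12 ms",
    "request 4522 served in 9 ms",
    "cache hit for key 77",
]
after_deploy = [
    "request 4609 served in 11 ms",                   # matches a known template
    "NullPointerException in OrderService line 88",   # never seen before
]

print(novel_events(baseline, after_deploy))
```

Only the exception line survives the filter; the routine request traffic, however voluminous, is suppressed. Real systems use far more sophisticated template mining and learned models, but the division of labor is the same: the machine absorbs the volume, and people see only what changed.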


In this Edition:

1:41: How did you become involved in machine learning and AI?
7:54: At what point is the neural network coming into play? What is it teaching itself prior to that use?
12:57: When do they start to make use of ML right now the way that you described it?
20:15: The confusion regarding the difference between AI, ML, and basic automation.
27:48: For smaller organizations that may not even need Kubernetes, can they run your application on-premises, or is that not feasible?
31:00: Moving things forward into chaos and where ML fits into that.

Feature image from Pixabay.



Source: InApps.net
