It is becoming increasingly apparent that enterprises must, in the near future, integrate machine learning (ML) into their software development pipelines in order to remain competitive. But what exactly does AI and ML integration mean for an organization, whether a small-to-medium-sized company, a large multinational or a government agency?

In the recently recorded livestream "Advanced Developer Workloads with Built-In AI Acceleration," Jordan Plawner, Intel's director of products and business strategy for artificial intelligence, described the building blocks DevOps teams require to harness the power of AI and ML for application development and deployment. Hosted by Alex Williams, founder and publisher of InApps, the discussion drew on Plawner's deep understanding of MLOps hardware and supporting software requirements: his main role is to ensure Intel's Xeon server processors and software are built to spec for DevOps teams' workloads and library needs.

“We are making it easy for developers and DevOps people to stand up and run AI-based workloads on our general-purpose processor Xeon,” Plawner said. “So, I’m responsible for the roadmap, and all the kind of software capabilities that we deploy for developers.”

At the end of the day, machine learning operations (MLOps) processes are comparable to DevOps processes, while also diverging in significant ways. ML is, to begin with, non-binary: systems are taught to draw inferences and perform analyses in ways that can often mimic the human brain. ML, in this way, is significantly more complex than simple "if-then" computing, even when compared to process-automation software that might also fall under the AI computing category.
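To make that contrast concrete, here is a minimal, hypothetical sketch (not from the livestream) that solves the same toy classification task two ways: with a hand-written "if-then" rule and with a model that learns from examples. It assumes scikit-learn; the messages and labels are illustrative only.

```python
# Hypothetical sketch: "if-then" rules vs. a learned model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

messages = ["win a free prize now", "meeting moved to 3pm",
            "free money claim now", "lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

# Rule-based ("if-then") approach: brittle, must be maintained by hand.
def rule_based_is_spam(text):
    return "free" in text or "win" in text

# ML approach: the model learns which word patterns matter from examples.
vectorizer = CountVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(messages), labels)

new_message = ["claim your free prize"]
print(rule_based_is_spam(new_message[0]))                 # True
print(model.predict(vectorizer.transform(new_message)))   # e.g. [1]
```

The rule works only until spammers change their wording; the learned model generalizes from whatever training data it is given, which is the non-binary behavior described above.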


As an example, a continuous integration/continuous delivery (CI/CD) ML pipeline built with TensorFlow, the open source platform Google uses for its own ML application development, must first be "taught" how to process often enormously complex calculations. Once trained, the model can then be deployed to run ML applications. Google also relies on the Intel oneAPI framework in its use of TensorFlow.
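The "teach first, then run" pattern looks roughly like the following hedged TensorFlow sketch. The livestream did not show code; the toy data (learning y = 2x) and model are illustrative assumptions, but the two phases map onto the training and deployment stages a CI/CD ML pipeline automates.

```python
# Hypothetical sketch of the two phases of an ML workload.
# Note: on recent x86 builds of TensorFlow, Intel oneDNN optimizations
# are enabled by default (set TF_ENABLE_ONEDNN_OPTS=0 to disable).
import numpy as np
import tensorflow as tf

# Toy training data: teach the model that y = 2x.
x = np.array([[0.0], [1.0], [2.0], [3.0]], dtype=np.float32)
y = 2.0 * x

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="sgd", loss="mse")

# "Teaching" phase: in a CI/CD ML pipeline this step runs, and is
# validated, before the model is promoted toward deployment.
model.fit(x, y, epochs=200, verbose=0)

# Inference phase: the trained model is applied to new inputs.
print(model.predict(np.array([[4.0]], dtype=np.float32)))  # ~[[8.0]]
```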

MLOps “is really taking the idea of DevOps and applying it to the deep learning space in the AI space to help automate a lot of these things that developers shouldn’t be spending their time doing,” Plawner said. “Data scientists should be experimenting and the DevOps people need good MLOps tools so that they can help orchestrate and manage that environment.”

The ML Mantra

Reflecting Intel’s philosophy, Plawner said, “we take a very broad view of artificial intelligence.”

“MLOps is a set of very extensive techniques to allow you to gain insight from different kinds of data historically,” he said.

DevOps teams have become “used to seeing very structured data in rows and columns and databases, but we now know that we also live in a sort of analog-to-digital world where we have lots of video, images and speech” that ML can translate into insights and recommendations, Plawner said.

ML processes are often required to draw inferences and make actionable recommendations from data that historically “has been very, very hard to gain insights from, what we call ‘non-structured data.’” These datasets might include images that lack the regular pattern or structure from which a non-ML-assisted application would be able to draw inferences.
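A quick, hypothetical illustration of drawing inferences from unstructured data: the sketch below classifies a raw image with a pretrained model, something no hand-written rules over pixels could realistically do. It assumes TensorFlow/Keras and a local file named photo.jpg; neither the model choice nor the file comes from the article.

```python
# Hypothetical sketch: inference on unstructured data (an image).
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, decode_predictions, preprocess_input)

model = MobileNetV2(weights="imagenet")  # pretrained weights are downloaded

# Load and preprocess a raw image into the shape the model expects.
img = tf.keras.utils.load_img("photo.jpg", target_size=(224, 224))
batch = preprocess_input(
    np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))

# Print the model's top-3 guesses with confidence scores.
for _, label, score in decode_predictions(model.predict(batch), top=3)[0]:
    print(f"{label}: {score:.2f}")
```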


“What I think about AI is the machine-learning and deep-learning techniques, and all the end-to-end pipeline data management needed in order to gain insights from all the data that is collected,” Plawner said.

Virtualized and containerized environments with libraries, supported mainly in Python, should increasingly be used to create the ML pipeline, so that data scientists can worry less about operations and concentrate on developing their ML applications.
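In practice, that separation of concerns can be as simple as the following hypothetical sketch: the data scientist writes only model code, while paths and parameters come from environment variables so a DevOps team can ship the script unchanged inside a prebuilt container. The variable names and defaults here are illustrative assumptions, not anything Intel or Plawner specified.

```python
# Hypothetical sketch: an ops-free training entry point that DevOps
# can package into a container and configure via the environment.
import os

import joblib  # ships with scikit-learn installs
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression


def main():
    # DevOps sets these when launching the container; local defaults
    # let the data scientist run the same script on a laptop.
    out_path = os.environ.get("MODEL_OUT", "model.joblib")
    max_iter = int(os.environ.get("MAX_ITER", "200"))

    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=max_iter).fit(X, y)

    joblib.dump(model, out_path)
    print(f"model written to {out_path}")


if __name__ == "__main__":
    main()
```

Because all environment-specific details live outside the code, swapping laptops for an Intel-optimized container is an operations decision, which is exactly the abstraction Plawner describes next.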

“I’ll tell the developer, ‘okay if you want to read the white papers about all these innovations, that’s great. But if you just want the latest innovations, point and click on this container and download it…and just run it on the latest Intel processor,’” Plawner said. “So, I think really that’s what we’re talking about: that abstraction, as we just want to pile all this into a container and make it much easier to run.”

View the whole discussion here.