Making ML Deployments Easier, Keeping Models on Track – InApps Technology


While building an engine to recommend news articles, Seldon founders Alex Housley and Clive Cox found that the biggest challenge for companies was building the infrastructure for machine learning rather than developing its algorithms. At the same time, they saw the growing popularity of cloud and cloud native technologies, so they created Seldon to help companies deploy and manage their machine learning infrastructure.

Its technology is designed not only to run ML models, but to provide governance and monitoring to ensure models run properly and effectively.

“Companies need to know their model is running as expected, customers need to know why it produces the predictions it does, DevOps need to discover if the model is being attacked, and data scientists need to know when to retrain it,” Cox said in a Q&A with Verdict.

In the early days, the go-to ML tools were Torch, scikit-learn and the like; TensorFlow and PyTorch came later.

“All the best tools now really are open source on the model-building side, but there was very little on the infrastructure side,” said Housley. Since the UK-based company’s launch in 2014, the MLOps space has really taken off.

“The next big trend was with Kubernetes and Docker and orchestrating containers, which we found were a perfect vehicle for machine learning models,” Housley said. Seldon is designed to be cloud-, tool- and language-agnostic.


“We’ve kind of narrowed our focus more as we’ve gone along. Initially, we saw that, for the end-to-end ML pipeline, there’s a number of really big challenges to overcome. And that’s what the large cloud vendors and some of the earlier ML platforms [were aiming at] — a one-stop-shop for end-to-end ML. But what we were seeing is that there was very little in terms of very deep R&D around deployment. And that was underserved purely in many cases, because that was a very early stage of the industry,” he explained.

“… A lot of our core competencies were around the DevOps space anyway. And we saw that DevOps as a kind of process and role was of increasing importance for application development in general. And a lot of the same kind of approaches really should be applied for machine learning, as well. So, we’ve focused really on that sort of end of the pipeline.”

Tool- and Language-Agnostic

Its initial product, Seldon Core, converts ML models (TensorFlow, PyTorch, H2O, etc.) or language wrappers (Python, Java, etc.) into containerized production REST/gRPC microservices. It extends Kubernetes with a custom SeldonDeployment resource and handles scaling to thousands of production models. It supports multiple deployment patterns, including A/B tests, canary rollouts and multi-armed bandits, and provides advanced features including tracing, request logging, explanation, and visualization of model health.
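To make the language-wrapper idea concrete, Seldon Core's Python wrapper can containerize a plain model class that exposes a `predict` method. The class below is a minimal, hypothetical sketch (the name and toy decision logic are illustrative, not from Seldon's documentation):

```python
import numpy as np

class IrisClassifier:
    """Minimal model class of the shape a Python language wrapper can
    turn into a REST/gRPC microservice (illustrative sketch only)."""

    def __init__(self):
        # A real model would load trained weights here, e.g. from disk.
        self.threshold = 3.0

    def predict(self, X, features_names=None):
        # Toy logic: label each row by whether its feature mean
        # exceeds a fixed threshold.
        X = np.asarray(X, dtype=float)
        return (X.mean(axis=1) > self.threshold).astype(int)

model = IrisClassifier()
print(model.predict([[5.1, 3.5, 1.4, 0.2], [6.0, 3.0, 4.8, 1.8]]))
```

Once wrapped and containerized, this class would serve predictions over REST or gRPC without any serving code of its own.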

The Core 1.5 release in December updated the Python wrapper to allow both REST and gRPC endpoints to be exposed for all inference graphs by default along with all the prepackaged servers. Istio and Ambassador configurations have been updated to allow both REST and gRPC configurations.

It has been integrated into Google’s KubeFlow project, and into ML offerings from vendors including Red Hat and others. The company is collaborating with companies including Google, Bloomberg, Nvidia, Microsoft and IBM on the KFServing project to provide a Kubernetes Custom Resource Definition for serving ML models on arbitrary frameworks.

  • Seldon Alibi is an open source Python library for machine learning model inspection and interpretation. It’s focused on black-box, white-box, local and global explanation methods for classification and regression models.
  • Alibi Detect is an open source Python library for outlier, adversarial and drift detection. It focuses on online and offline detectors for tabular data, text, images and time series.
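The outlier-detection idea behind a library like Alibi Detect can be illustrated with a simple z-score detector on tabular data. This is a conceptual sketch, not Alibi Detect's API or implementation:

```python
import numpy as np

def fit_zscore_detector(X_ref, threshold=3.0):
    """Fit a toy outlier detector on reference data: flag rows whose
    largest per-feature z-score exceeds the threshold (conceptual
    sketch, not Alibi Detect's implementation)."""
    mu = X_ref.mean(axis=0)
    sigma = X_ref.std(axis=0) + 1e-12  # guard against division by zero
    def is_outlier(X):
        z = np.abs((np.asarray(X, dtype=float) - mu) / sigma)
        return z.max(axis=1) > threshold
    return is_outlier

rng = np.random.default_rng(0)
X_ref = rng.normal(0.0, 1.0, size=(1000, 4))  # reference (training) data
detect = fit_zscore_detector(X_ref)
print(detect([[0.1, -0.2, 0.3, 0.0],    # typical point
              [8.0,  0.0, 0.0, 0.0]]))  # extreme on one feature
```

Production libraries add far more sophisticated detectors (VAE-based, Mahalanobis, etc.), but the contract is the same: fit on reference data, then flag incoming instances that fall outside it.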

“Users need to have visibility into concept drift and bias detection to ensure the reliability and fairness of ML solutions,” Cambridge Innovation Capital partner Vin Lingathoti wrote in his 2021 tech predictions. “This will be fundamental and mandatory across many organizations before they can roll out ML models at scale.”

Cambridge Innovation Capital and AlbionVC led Seldon’s £7.1 million (U.S. $9.8 million) Series A round last November, bringing its total investment to $13.7 million.

In an IDG article, Seldon engineering director Alejandro Saucedo points to three challenges companies face with AI/ML: algorithmic bias, explainability and accountability.

At the launch of Deploy 1.0 in February, UK law firm Osborne Clarke’s John Buyers warned that “explainability is a huge, looming regulatory issue that is going to be coming down the pipe.”

Explained Housley: “We’re focusing on explanation and monitoring, providing algorithms that are robust, broadly cover the main use cases that you’d need for explainability and monitoring. So within monitoring things like detecting outliers, detecting model drift, when the model is kind of going out of sync with the training data and reducing the performance ultimately of the model, the quality of the output. And there’s been other research we’ve done around things like adversarial attack detection, so there’s security-based use cases as well.”
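The drift monitoring Housley describes — watching for live data drifting away from the training distribution — can be sketched per feature with a two-sample Kolmogorov-Smirnov statistic. This is a conceptual illustration, not Seldon's implementation:

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the empirical CDFs of samples a and b."""
    a, b = np.sort(a), np.sort(b)
    values = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, values, side="right") / len(a)
    cdf_b = np.searchsorted(b, values, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

rng = np.random.default_rng(42)
train = rng.normal(0.0, 1.0, size=5000)       # "training" distribution
live_ok = rng.normal(0.0, 1.0, size=5000)     # live data, no drift
live_drift = rng.normal(1.5, 1.0, size=5000)  # live data, shifted mean

print(round(ks_statistic(train, live_ok), 3))     # small: distributions match
print(round(ks_statistic(train, live_drift), 3))  # large: drift detected
```

A monitoring pipeline would run a check like this on a rolling window of production inputs and alert (or trigger retraining) when the statistic crosses a significance threshold.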

Its customers include retailer H&M, Capital One and pharmaceutical company AstraZeneca, which has a stated mission to employ MLOps to reduce the research phase of the drug discovery cycle from 24 months to 12 months by 2025.

InApps Technology is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.

Source: InApps.net
