Hamza Tahir sees the machine learning and artificial intelligence ecosystem diverging: on one side, groundbreaking research that has produced remarkable capabilities in neural networks, natural language processing and other areas; on the other, companies' ability to actually apply that research in production.

For projects like OpenAI's GPT-3, there's enormous potential, but the production requirements put them within reach of only Google and other major tech companies, he said, while plenty of machine learning operations (MLOps) problems remain to be solved. That's where his company, Munich-based Maiot, comes in.

Creating the algorithms behind models requires a rigorous mathematical understanding of machine learning, he said, while deploying and running these models requires engineering know-how. That intersection of skills is pretty rare.

“And I have a fear that the courses that are taught about these things are a bit in the middle. They’re sort of taught the mathematics, they’re sort of taught the engineering, but they don’t really know much about either. And they go into the industry or into research, and they face huge shocks or a huge gap in understanding as to how to do either,” he said.

“So our biggest motivation at Maiot is to try to have tooling be a crutch for these people to actually go into, in our case, the engineering direction, so like the framework itself can be educational in a way, but also be like a portal into this ops world in a way that is more consumable and digestible for them.”


Open Source ZenML Framework

Tahir, now the company’s chief technology officer for artificial intelligence, met co-founder Adam Probst at an entrepreneurship program associated with the Technical University of Munich. Initially, they focused their efforts on predictive maintenance in transportation, first looking at trucks and later monitoring the maintenance needs of 100 public buses.

“We actually had a lot of data we analyzed, and it was a pretty cool thing. But that’s really where we first ran into our MLOps problems. … where we first realized that doing something as a POC or in research is quite different from doing it in production. As you can imagine, making the models scalable, usable, reproducible…” he said, adding that in DevOps, there are standards for code and data to follow to ensure an application is reproducible and robust in production. In a field as rapidly growing yet still young as MLOps, those standards are still being developed, he said.

In response, they created Python-based ZenML, an extensible open source tool for creating machine learning pipelines.

It’s designed to address problems such as versioning data, code, configuration and models; reproducing experiments across environments; establishing a reliable link between training and deployment; and tracking the metadata and artifacts that are produced.

Users break individual tasks into Steps, which are composed in sequence to create a Pipeline. Each pipeline contains a Datasource, which represents a snapshot of a versioned dataset at a point in time and grows as new data is added.
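A minimal sketch of that pattern, assuming a ZenML release that exposes the @step and @pipeline decorators (import paths have moved between versions); the function names and toy data here are illustrative, not from Maiot:

```python
# Minimal sketch of ZenML's Step/Pipeline pattern. Assumes a release
# exposing the @step and @pipeline decorators; import paths differ
# across versions.
from zenml import pipeline, step


@step
def load_data() -> dict:
    # Stand-in for a Datasource: in practice this step would pull a
    # versioned snapshot of a dataset, not hard-coded values.
    return {"features": [[1.0], [2.0], [3.0]], "labels": [0, 0, 1]}


@step
def train_model(data: dict) -> float:
    # Toy "training" step: returns the positive-label rate as a
    # stand-in for a real fitted model.
    return sum(data["labels"]) / len(data["labels"])


@pipeline
def training_pipeline():
    # Steps composed in sequence form the pipeline; ZenML wires the
    # output of one step into the input of the next.
    data = load_data()
    train_model(data)


if __name__ == "__main__":
    training_pipeline()
```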

Backends, meanwhile, define where and how each step is run. There are three types: orchestrators, which coordinate pipeline execution; processing backends, which define the required environment; and training backends, used solely for training pipelines.
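The division of labor is easier to see as interfaces. The sketch below is purely conceptual — these are our own stand-in classes, not ZenML's actual backend API, which has varied across versions:

```python
# Conceptual illustration of the three backend types described above.
# These interfaces are our own stand-ins, not ZenML's actual classes.
from abc import ABC, abstractmethod


class OrchestratorBackend(ABC):
    """Decides where the pipeline as a whole runs (locally, Kubernetes, ...)."""

    @abstractmethod
    def run_pipeline(self, pipeline) -> None: ...


class ProcessingBackend(ABC):
    """Defines the environment and resources in which a step executes."""

    @abstractmethod
    def run_step(self, step) -> None: ...


class TrainingBackend(ABC):
    """Used solely by training pipelines, e.g. to target GPU infrastructure."""

    @abstractmethod
    def train(self, step) -> None: ...
```

Because pipeline logic only talks to these interfaces, the same pipeline can move from a laptop to cloud infrastructure by swapping the backend, not the code.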

By developing in pipelines, ML practitioners give themselves a path from research to production from the very beginning, while also benefiting in the research phase from the automation ZenML introduces.

The company maintains that by separating backends from the pipeline logic, ZenML achieves Terraform-like scalability, extensibility and reproducibility for all its pipelines.


Focus on Data Scientists

As an opinionated MLOps tool, in the vein of frameworks like Ruby on Rails or Gatsby, ZenML gives users common problems already worked out for them — what Tahir calls “batteries-included stuff” — rather than leaving them to build ML pipelines as Jupyter Notebooks, as scripts or on dev machines.

Bootstrapped Maiot integrates with “best-in-class” offerings such as Seldon Core or Tecton while competing with the likes of Databricks’ MLflow, Kubeflow and Valohai.

Its core engine automates tasks such as tracking input parameters for preprocessing and model training, comparing pipelines and caching pipeline steps, and it provides native modular backends covering everything from distributed preprocessing to training, serving, versioning, cloud integrations and scalability.
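Caching in particular is exposed at the step level. A hedged sketch, assuming ZenML's @step decorator accepts an enable_cache flag, as in the releases we are aware of; the step bodies are toy stand-ins:

```python
# Sketch of per-step caching. Assumes ZenML's @step decorator accepts
# an enable_cache flag; the step bodies are toy stand-ins.
from zenml import pipeline, step


@step(enable_cache=True)
def preprocess() -> list:
    # Expensive transformation: with caching on, reruns with unchanged
    # code and inputs reuse the stored artifact instead of recomputing.
    return [x * 2 for x in range(1000)]


@step(enable_cache=False)
def evaluate(data: list) -> float:
    # Caching disabled: this step recomputes on every run.
    return sum(data) / len(data)


@pipeline
def eval_pipeline():
    evaluate(preprocess())


if __name__ == "__main__":
    eval_pipeline()
```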

“But I think the special thing we’re bringing here is the interface that we’re building from the perspective of a data scientist. One of the things that we’ve noticed across our journey in the MLOps space, though there are other MLOps tools out there, there are ML tools that are built for ML people and there are ops tools built for ops people. But there weren’t really any ops tools built for ML people. And that’s the sweet spot we wanted to take to,” he said.

Data scientists might have a Ph.D. in physics, for instance, but not much coding experience.

“They’re more like people who are used to scripting, and they’re used to doing the Jupyter stuff … but that’s not their fault. It’s just how data science is taught. And that’s how their background and skill set is,” he said. “They’re not really equipped to deal with Kubernetes clusters or Docker or deployments or splitting up traffic and A/B testing, all that. So when there’s a disconnect between these two things, the tooling has to fill that gap, because you cannot just make two teams and say, ‘OK, the [data science] guys make the model, then hand it over to the production guys, ops guys who don’t know anything about machine learning, and they just deploy the model.’ That’s never going to work. … So eventually, they had to converge together,” he said, equating it to the evolution of DevOps.


“Because you know the most about your model, right? So that’s what we’re trying to do: we’re trying to build these higher-level abstractions across powerful ops tooling so that normal data scientists can use this framework and integrate into powerful machine learning and ops tools, but not have the feeling of doing something that they’re not comfortable with at all. That’s where our sweet spot is. That’s where we see the most interest from the community as well,” he said.

InApps Technology is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.