Red Hat Fills a Gap with OpenShift Data Science – InApps Technology

Red Hat Fills a Gap with OpenShift Data Science – InApps Technology is an article under the topic DevOps that many of you are interested in today. In this post, let’s learn about Red Hat Fills a Gap with OpenShift Data Science with InApps.net.

Key Summary

This article from InApps Technology, authored by Phu Nguyen, details Red Hat’s launch of Red Hat OpenShift Data Science as a field trial in 2022, a managed cloud service tailored for AI/ML workloads on Red Hat OpenShift. It addresses enterprise needs for streamlined AI/ML development and deployment, building on Red Hat’s expertise and the Open Data Hub open-source project. Key points include:

  • Overview:
    • Purpose: Provides a managed environment for AI/ML, filling a gap for enterprises needing core AI/ML components without complex partner suites.
    • Origin: Evolved from Red Hat’s internal AI/ML features (e.g., Red Hat Insights) and the Open Data Hub, driven by customer demand for a standalone service.
    • Field Trial: A “code-ready” product with SRE support, seeking feedback to ensure market fit before full general availability (GA).
  • Core Components:
    • Built on a subset of Open Data Hub tools: JupyterLab, TensorFlow, PyTorch, scikit-learn, Pandas, and NumPy.
    • Deeply integrated with OpenShift for repeatable lifecycle processes, leveraging Red Hat’s DevOps expertise (e.g., Tekton for pipeline management).
  • Use Cases:
    • Beginners: Simplifies training and experimenting with ML models.
    • Advanced Users: Enables running the latest AI/ML tools without management overhead, with bi-weekly updates to keep pace with rapid changes.
    • Talent Shortage Solution: Allows engineers to perform modeling tasks, reducing dependency on data scientists and enabling faster progress with validation.
    • Operations-Friendly: Models are deployed as containers, aligning with operations teams’ familiarity, easing integration with source systems.
  • New Features:
    • Support for the Intel oneAPI AI Analytics Toolkit and Intel OpenVINO Pro for Enterprise, optimizing AI workloads.
    • Integrations with Anaconda Commercial Edition, IBM Watson Studio, Seldon Deploy, and Starburst Galaxy for enhanced functionality.
    • Future plans include support for NVIDIA GPUs, Google Cloud Platform (GCP), and Microsoft Azure, aiming for a robust ModelOps environment (end-to-end AI/ML lifecycle automation).
  • Red Hat’s Vision:
    • Aligns with Red Hat’s DevOps DNA, focusing on developing, publishing, and monitoring AI/ML models in a repeatable, automated lifecycle.
    • Aims to standardize AI/ML processes as industry standards mature.
  • InApps Insight:
    • InApps Technology leverages AI/ML and DevOps practices, integrating Microsoft’s Power Platform and Azure, using Power Fx for low-code AI solutions and Azure Durable Functions for scalable workflows.
    • Combines Node.js, Vue.js, GraphQL APIs (e.g., Apollo), and Azure to deliver AI-driven, cloud-native solutions, targeting startups and enterprises with Millennial-driven expectations.

Following its initial launch earlier this year, Red Hat has released Red Hat OpenShift Data Science as a “field trial”. The managed cloud service provides enterprises with an environment tailored for artificial intelligence and machine learning (AI/ML) on Red Hat OpenShift.

According to Steve Huels, senior director for AI product management at Red Hat, AI/ML is nothing new for Red Hat, and the origins of Red Hat OpenShift Data Science lie in the company’s experience building AI/ML features into its own platform, with things like Red Hat Insights. Eventually, Red Hat took that experience and codified it into the Open Data Hub open source project, but Huels explained that Red Hat’s customers were looking for a managed service they could buy that would provide these capabilities.

“They were like, this is fantastic, it did everything we wanted to do, can we buy it? And the answer was always around a set of Red Hat components and partner components, but that left a little spot in the middle that nobody was offering in a standalone capacity,” said Huels. “You could always buy a much larger suite from a partner, but it came with a lot of other stuff that maybe you weren’t ready for or didn’t need. Ultimately, that is what led to OpenShift Data Science. It was able to fill this gap that partners were having around a core set of AI/ML components.”

Built on a Subset of Components

Red Hat OpenShift Data Science is built upon a subset of the components offered in Open Data Hub, such as JupyterLab, TensorFlow, PyTorch, scikit-learn, Pandas, and NumPy, which it then integrates more deeply with Red Hat OpenShift and wraps with SRE support as part of the managed service. Part of the integration also centers on the ability for users to perform repeatable actions, rather than having to configure and deploy from the ground up each time, explained Huels.
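
As a rough illustration of the kind of notebook workflow these components enable, the minimal sketch below loads a tabular dataset with Pandas and trains a scikit-learn model, the sort of experiment a data scientist would run from JupyterLab on the service. The CSV file and column names are hypothetical and are not part of OpenShift Data Science itself.

```python
# Minimal sketch of a notebook-style experiment using the bundled components.
# The CSV path and column names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a tabular dataset with Pandas.
df = pd.read_csv("customer_churn.csv")   # hypothetical dataset
X = df.drop(columns=["churned"])         # hypothetical label column
y = df["churned"]

# Hold out a test split and train a scikit-learn model.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```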

“Part of the challenge also with AI/ML is how do we put this stuff into a repeatable lifecycle process, right? It’s one thing to do it as a one-off experiment, it’s another to be able to take that, do it repeatedly, do it routinely,” Huels said. “This is where Red Hat’s DNA within DevOps really came into play. We’ve been managing things like this for developers on the GitOps lifecycle for a long, long time,” he added, pointing to things like Red Hat’s use of Tekton for pipeline management.
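
Tekton pipelines themselves are defined declaratively, but the repeatable lifecycle Huels describes can be sketched as a sequence of train, validate, and publish steps. The Python below is only a conceptual illustration of that flow, with an assumed quality gate and artifact path; it is not the service’s actual pipeline definition.

```python
# Conceptual sketch of a repeatable train/validate/publish lifecycle.
# A managed pipeline (e.g. Tekton) would run steps like these as tasks;
# the dataset, quality threshold, and artifact path are illustrative.
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def train():
    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y)
    return model, X, y


def validate(model, X, y, threshold=0.9):
    # Gate the pipeline on a minimum cross-validated accuracy.
    score = cross_val_score(model, X, y, cv=5).mean()
    if score < threshold:
        raise RuntimeError(f"model below quality bar: {score:.3f}")
    return score


def publish(model, path="model.joblib"):
    # Persist the model artifact; a real pipeline would push it to a
    # registry or bake it into a container image.
    joblib.dump(model, path)
    return path


if __name__ == "__main__":
    model, X, y = train()
    print("cv accuracy:", validate(model, X, y))
    print("published to:", publish(model))
```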

Use Cases

As for who should consider using OpenShift Data Science, Huels said that the service has something for everyone, from those just getting started with AI/ML who want an easier way to try out training and using ML models, to those who want to run the latest tools but don’t want to be bothered with managing them. Red Hat puts out a new release every two weeks, helping data scientists keep up with the rapid change in the components, and operations teams don’t have to do anything out of the ordinary to keep things up to date.

“When it comes time to take that model back in house and deploy it closer to whatever the source system is, the operations side is comfortable with that because the model comes out as a container,” explained Huels. “It helps alleviate that tension where both communities get what they need, and they both feel comfortable with the working environment.”
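
To illustrate why a containerized model is comfortable territory for operations teams, the sketch below shows a minimal inference entry point of the kind that gets baked into a model container. Flask, the /predict route, and the model.joblib filename are assumptions made for this example; in practice, integrations such as Seldon Deploy, mentioned below, handle serving.

```python
# Minimal sketch of an inference entry point packaged into a model container.
# Flask, the route, and the artifact filename are illustrative assumptions.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # artifact produced by a training step


@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON payload like {"instances": [[5.1, 3.5, 1.4, 0.2], ...]}
    instances = request.get_json()["instances"]
    predictions = model.predict(instances).tolist()
    return jsonify({"predictions": predictions})


if __name__ == "__main__":
    # A Containerfile would typically run this behind a production server
    # such as gunicorn rather than Flask's built-in server.
    app.run(host="0.0.0.0", port=8080)
```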

Another potential use case for OpenShift Data Science, explained Huels, was for organizations that found themselves unable to hire data scientists for one reason or another.

“This helps fill a gap where you can take really good engineers, and they can do some of that modeling themselves,” said Huels. “Instead of being dependent on the data science community, they’re able to do some of that work, and then get it validated by the data scientists, but be able to move forward more rapidly. It helps alleviate some of that talent shortage concern that’s out there, and keep folks moving forward.”

New Features

This latest version of Red Hat OpenShift Data Science comes with several new features, including the Intel oneAPI AI Analytics Toolkit and support for Intel OpenVINO Pro for Enterprise, as well as new integrations for Anaconda Commercial Edition, IBM Watson Studio, Seldon Deploy, and Starburst Galaxy.

With the service’s release as a “field trial,” Huels said this was another thing that Red Hat was starting as a parallel to the usual alpha, beta, general availability (GA) cycle found with traditional on-prem software. The field trial, he said, provides a “code-ready product” with support and SRE monitoring, but doesn’t put it at the level of GA quite yet. Instead, Red Hat is looking for feedback and wants to make sure the service has a market fit before it is fully released as GA.

Moving forward, the service is expected to gain support for NVIDIA-accelerated computing with NVIDIA graphical processing units (GPUs), as well as availability on Google Cloud Platform (GCP) and Microsoft Azure. In broader terms, Huels said that Red Hat wants to expand into providing “a more robust model serving environment”.
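
GPU support is still on the roadmap, but its effect on user code is straightforward to illustrate: PyTorch, which the service already bundles, selects a CUDA device when one is present and falls back to CPU otherwise. The snippet below is a minimal, self-contained example of that pattern, with a toy model used purely for illustration.

```python
# Illustration of what NVIDIA GPU support means for PyTorch workloads:
# the code selects a CUDA device when available and falls back to CPU.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(128, 10).to(device)   # toy model for illustration
batch = torch.randn(32, 128, device=device)   # synthetic input batch

with torch.no_grad():
    logits = model(batch)

print(f"ran on {device}, output shape {tuple(logits.shape)}")
```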

“We want to be able to provide the full end-to-end ModelOps, because that really is the automated realization of the full DevOps lifecycle: develop it, publish it, monitor it, rinse and repeat,” said Huels. “These things are true to Red Hat’s DNA. We’ve got a long history there, we can plug these in. It’s just a matter of the standards in the AI/ML world kind of settling, and there being some agreement there.”

Source: InApps.net
