Where will AI/ML workloads be executed, and who should handle them? The industry-wide rising tide toward the public cloud is not a foregone conclusion, as we were reminded by a Micron-commissioned report from Forrester Consulting, which surveyed 200 business professionals who manage architecture, systems, or strategy for complex data at large enterprises in the US and China.

As of mid-2018, 72 percent are analyzing complex data within on-premises data centers and 51 percent do so in a public cloud. Three years from now, on-premises use will drop to 44 percent while public cloud use for analytics will rise to 61 percent. Those using edge environments to analyze complex data sets will rise from 44 to 53 percent.

Those figures make a strong case for the cloud, but they do not say whether the complex data is related to AI/ML. Many analytics workloads deal with business intelligence (BI) rather than tasks that require high-performance computing (HPC) capabilities. While not all AI/ML workloads fall into that category, some do require hardware customized to maximize performance when training models. So far, early adopters of AI/ML have relied more on the public cloud than on their own equipment.

Currently, 42 percent of respondents rely exclusively on third-party cloud providers’ hardware to train AI/ML models, but only 12 percent will do so three years from now. Instead, a majority will use a combination of on-premises and public cloud environments. Many companies may have gone to cloud providers first because they wanted to launch AI/ML initiatives quickly. These same companies may migrate specialized workloads to on-premises environments to reduce costs as they scale up into production or work with proprietary data.

Source: Micron/Forrester. Companies relying exclusively on cloud providers’ hardware to train AI/ML models will drop from 42 percent today to 12 percent three years from now. In the future, a majority of companies using hardware customized for AI/ML will work with a combination of on-premises and cloud environments.

Another reason to change computing environments is to increase performance. Forty-three percent of Forrester/Micron respondents say the locality of compute and memory is “critical” for AI/ML, with another 46 percent saying it is “important.” In response, 90 percent are planning to move compute and memory closer together for AI/ML workloads. In separate questions, over three-quarters said it is either critical or important to re-architect both memory and storage to meet their AI/ML training needs.
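Why does locality matter so much? When data sits far from the compute that consumes it, memory bandwidth rather than raw compute becomes the bottleneck. As a minimal, small-scale sketch (not from the survey; the array size and stride are arbitrary illustrative assumptions), the following Python snippet sums the same number of elements twice, once densely packed and once spread across eight times as much memory:

```python
# Minimal sketch: two sums over the SAME number of elements.
# The strided version touches 8x as much memory per useful element,
# wasting cache-line and DRAM bandwidth -- a small-scale analogue of
# why keeping compute close to memory speeds up data-heavy workloads.
import time
import numpy as np

N = 10_000_000
contig = np.random.rand(N)            # densely packed elements
strided = np.random.rand(N * 8)[::8]  # same count, spread 8x wider in memory

def timed_sum(arr):
    start = time.perf_counter()
    total = arr.sum()
    return total, time.perf_counter() - start

_, t_contig = timed_sum(contig)
_, t_strided = timed_sum(strided)
print(f"contiguous: {t_contig:.3f}s  strided: {t_strided:.3f}s")
```

On most machines the strided sum runs noticeably slower even though the arithmetic is identical, which is the same pressure, writ small, that drives the re-architecting of memory and storage the respondents describe.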

Chip vendors like Micron, Intel, Nvidia, and AMD are all selling hardware optimized for AI/ML workloads, and the survey suggests the demand is real. The open question is whether cloud providers, enterprises, or both will be the ones buying these processors. Furthermore, while some AI/ML workloads will be moving to the “edge,” the data does not indicate whether that computing will be done internally or through a third-party cloud provider.

Context from Other Studies

For analytics in general, there are conflicting reports about the extent of the cloud’s short-term reach. Eighty-three percent of IT decision-makers think the cloud is the best place to run analytics, according to a Teradata study. Yet we are skeptical of this finding because most data science work is not so large or so steady that it requires a specialist cloud provider. In fact, a JetBrains survey found that 78 percent of data science specialists perform computations on local machines, while only 32 percent use a cloud service.

If you bundle AI/ML activities more broadly into the use of data infrastructure, then almost everyone is using a public cloud as part of their solution. The survey for O’Reilly’s “Evolving Data Infrastructure” report found that 85 percent of respondents were using at least one of seven major cloud providers for at least part of their data infrastructure.

The most common planned approach to AI/ML is to use a combination of bought solutions and homegrown tools and algorithms, according to TDWI’s “BI and Analytics in the Age of AI and Big Data.” However, there is limited demand for pre-built models. As we reported earlier this year, while half of data specialists say their company’s data scientists create ML models, only 3 percent say they are using a cloud ML service for this purpose.

Source: TDWI
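To make the “homegrown tools” finding concrete, here is a minimal sketch of what in-house model building often looks like in practice, assuming a standard open source Python stack (scikit-learn with its bundled sample data; the dataset and model choice are illustrative, not drawn from the survey):

```python
# Minimal sketch of a "homegrown" ML workflow: train and evaluate a model
# on a local machine with off-the-shelf open source tools, no cloud ML
# service involved. Dataset and model are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```

Workflows like this run comfortably on a laptop, which is consistent with the JetBrains finding above that most data science computation still happens on local machines.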

InApps Perspective

Janakiram MSV has recently written “An Introduction to the Machine Learning Platform as a Service” and “Build and Deploy a Machine Learning Model with Azure ML Service.” Stay tuned for many more articles as we research an upcoming “Machine Learning Pipelines on Kubernetes” ebook.

Feature image via Pixabay.