The idea of distributing machine learning is not a new one. Google was one of the first to implement it at scale, training keyboard autocompletion models directly on Android phones. And with the growth of IoT, there are stirrings of pushing the processing of ML models down toward the edge, a notion often associated with the emerging concept of fog computing. The idea isn't new, but implementation and commercialization remain in their infancy. Now HPE has come up with "Swarm Learning," a new twist on federated learning that uses blockchain technology.
Federated learning is an approach that trains models remotely, on the data sources where the data lives, and then communicates only the trained models back to a hub, where the finished model is determined through a consensus process. The obvious use case is IoT, where remote devices generate torrents of data at a scale where it wouldn't make sense to ferry all that data back to the cloud for model training or inference. Commercially, providers such as Integrate.ai and Devron are starting to deliver solutions for managing federated learning, but as just noted, this solution space is still in its infancy.
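To make the mechanics concrete, here is a minimal sketch of federated averaging (FedAvg), the aggregation scheme commonly used in federated learning; it is illustrative only and not HPE's or any vendor's implementation. The node data, the single-weight linear model, and the learning rate are all assumptions chosen for brevity.

```python
# Minimal sketch of federated averaging: each node trains on data that
# never leaves it; only trained weights travel back to the hub.

def local_train(weight, data, lr=0.1):
    """One local gradient-descent step fitting y = w * x on this
    node's own data (mean-squared-error objective)."""
    grad = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
    return weight - lr * grad

def federated_average(local_weights, sizes):
    """Hub-side aggregation: average each node's weight, weighted by
    how much data that node holds."""
    total = sum(sizes)
    return sum(w * n for w, n in zip(local_weights, sizes)) / total

# Three hypothetical nodes, each holding private (x, y) pairs that
# roughly follow y = 2x.
node_data = [
    [(1.0, 2.0), (2.0, 4.0)],               # node A
    [(3.0, 6.0)],                           # node B
    [(1.0, 2.1), (4.0, 8.2), (2.0, 3.9)],   # node C
]

global_w = 0.0
for _ in range(50):  # communication rounds
    local_ws = [local_train(global_w, d) for d in node_data]
    global_w = federated_average(local_ws, [len(d) for d in node_data])

print(round(global_w, 2))  # converges near 2.0, the shared slope
```

A real deployment would ship gradient updates for deep networks rather than a single scalar, but the round structure (local step, then weighted aggregation) is the same.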
A potential drawback of these approaches is that they rely on a hub, which can become a choke point or single point of failure. That's where HPE's Swarm Learning approach comes in: it not only aggregates and pushes down training and inference workloads, but does so with the use of blockchain approaches.
At first glance, what could appear more buzzword-compliant than "machine learning" and "blockchain"? But there is real method to the madness. As envisioned, Swarm Learning is targeted at scenarios involving privacy or regulation that preclude or discourage data from being moved. That's the rationale for the blockchain. For instance, you may have a group of hospitals cooperating in a study applying machine learning to disease prevention, detection, or outcomes, but patient data is the immovable barrier.
HPE implements a blockchain based on Ethereum technology that allows data to stay in place while models are trained and run locally, in an environment where the model results being exchanged are tamper-proof. HPE sets up a Swarm network where individual nodes register, and those nodes then perform the modeling.
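The tamper-proofing idea can be illustrated with a generic hash-chained ledger, the basic mechanism underlying blockchains. This is a sketch of the concept only, not HPE's Ethereum-based implementation; the node names and weight values are invented for the example.

```python
# Illustrative hash-chained ledger: each entry commits to the previous
# entry's hash, so altering any earlier model result breaks the chain.
import hashlib
import json

def entry_hash(body):
    """Deterministic SHA-256 hash of a ledger entry body."""
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_result(ledger, node_id, weights):
    """A node posts its local training result, chained to the prior entry."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"node": node_id, "weights": weights, "prev": prev}
    ledger.append({**body, "hash": entry_hash(body)})

def verify(ledger):
    """Recompute every hash and link; any tampering is detected."""
    prev = "0" * 64
    for entry in ledger:
        body = {k: entry[k] for k in ("node", "weights", "prev")}
        if entry["prev"] != prev or entry["hash"] != entry_hash(body):
            return False
        prev = entry["hash"]
    return True

ledger = []
append_result(ledger, "hospital-a", [0.41, 1.97])
append_result(ledger, "hospital-b", [0.39, 2.03])
print(verify(ledger))                # True: chain is intact

ledger[0]["weights"] = [9.9, 9.9]    # tamper with an earlier result...
print(verify(ledger))                # False: verification fails
```

A production blockchain adds distributed consensus and replication on top of this, so no single party can rewrite the chain even if it controls one node's copy.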
The offering incorporates several components. It starts with the Swarm Learning libraries, delivered as containers that can run on any target infrastructure based on Kubernetes. The models themselves stay intact; HPE claims that models can be deployed on the swarm with just four additional lines of code. Then there is the Swarm Network, which comprises the blockchain, a control plane, and a license server.
Potential Use Cases
There are numerous potential use cases for distributed learning. In the healthcare domain, hospitals around the world could apply ML to identify cancer on MRI images, providing a large base of training data that does not have to be moved into a central place. Financial institution consortia could collaborate to build fraud detection or personalization models across global data sets. Global marketing campaigns across franchisee retail networks are another example, where the wealth of models could be shared in a manner that respects local ownership of data. And of course, there are the cases where global models span data that, by regulation, is not allowed to cross borders.
For now, Swarm Learning is still at the early adopter phase for HPE, with plans to eventually productize it as part of HPE Ezmeral MLOps. While it doesn't necessarily require a professional services engagement, we'd expect that early adopters will likely need expert help for the jumpstart.
Featured image by PollyDot from Pixabay.