Researchers ‘Drop the Zeroes’ to Speed Deep Learning

Researchers at the King Abdullah University of Science and Technology (KAUST) are now proposing a method of accelerating distributed deep learning by dropping data blocks with zero values, which are frequently produced during distributed machine learning processes that use large datasets.

The growing amount of data needed to train increasingly complex AI models is prompting experts to look for more efficient ways to train deep neural networks. One approach is distributed deep learning, which scales out the training of models over a wider base of computational resources.

While this form of distributed machine learning is more efficient, the size of newer and larger deep neural networks, such as the computationally intensive NLP models BERT and GPT-3, will soon outstrip the computational capacity of current state-of-the-art supercomputers.

Distributed deep learning is often achieved with data parallelization, a form of parallel computing that distributes the data across different parallel processor nodes, thus boosting efficiency by splitting the computational load across a broader range of resources.
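As a rough illustration of what data parallelism looks like in practice (a generic sketch, not the researchers' setup), the snippet below uses PyTorch's DistributedDataParallel to replicate a model across processes and shard the dataset with a DistributedSampler; the model, dataset, batch size, and learning rate are placeholders.

```python
# Minimal data-parallel training sketch using PyTorch DistributedDataParallel.
# The model, dataset, batch size, and learning rate are illustrative placeholders.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler

def train(rank, world_size, model, dataset):
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    model = DDP(model.to(rank), device_ids=[rank])
    # Each process sees a different shard of the data.
    sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    for inputs, targets in loader:
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(inputs.to(rank)), targets.to(rank))
        loss.backward()   # gradients are all-reduced across processes here
        optimizer.step()
    dist.destroy_process_group()
```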

The researchers’ method focuses on what are known as collective communication routines, which are a core component of parallel computing applications and are used to combine data among multiple processes operating simultaneously. These routines have to perform smoothly for the workload to scale efficiently.
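For a concrete sense of what such a routine does, the hedged sketch below uses PyTorch's all_reduce collective to sum, and then average, gradients across all participating processes; it is a generic illustration of collective communication, not anything specific to the KAUST work.

```python
# Illustrative collective communication: all-reduce a model's gradients so that
# every process ends up with the same averaged values. Generic sketch only.
import torch.distributed as dist

def synchronize_gradients(model):
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            # Element-wise sum across all processes, then average locally.
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size
```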


“To enable better scaling, we aim to decrease communication overheads by optimizing collective communication,” wrote the team in their paper, which was presented at the ACM SIGCOMM 2021 conference. “These overheads are substantial in many DNN workloads, especially for large models where there exists a significant gap between the measured performance and ideal linear scaling.”

Dropping Zeroes Speeds up Distributed Deep Learning

During model training, learning tasks are allocated to various computing nodes, which exchange and compare their results over the communication network before moving on to the next job. According to the team, this communication between nodes is a major bottleneck in distributed deep learning.

“Efficient collective communication is crucial to parallel-computing applications such as distributed training of large-scale recommendation systems and natural language processing models,” said the team.

The researchers also observed that as model size grows, the proportion of zero values in the data blocks increases, a phenomenon known as sparsity. While tools for collective communication routines already exist, the team noted that these libraries do not support sparse data, which led them to develop their idea.
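To make the notion of block-level sparsity concrete, here is a small assumed sketch that splits a gradient tensor into fixed-size blocks and reports the fraction that are entirely zero; the block size of 256 is an arbitrary illustrative choice, not a figure from the paper.

```python
# Rough sketch: measure how many fixed-size blocks of a gradient are all zeros.
# The block size is an arbitrary illustrative value, not a figure from the paper.
import torch

def zero_block_fraction(grad: torch.Tensor, block_size: int = 256) -> float:
    flat = grad.flatten()
    pad = (-flat.numel()) % block_size          # pad so the length divides evenly
    if pad:
        flat = torch.nn.functional.pad(flat, (0, pad))
    blocks = flat.view(-1, block_size)
    zero_blocks = (blocks.abs().sum(dim=1) == 0).sum().item()
    return zero_blocks / blocks.shape[0]
```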

“We propose OmniReduce, an efficient streaming aggregation system that exploits sparsity to maximize effective bandwidth use by sending only non-zero data blocks. Most existing collective libraries — including DDL-specialized ones like NCCL and Gloo — have no native support for sparse data. These libraries assume dense input data and make inefficient use of precious network bandwidth to transmit large volumes of zeroes.”

OmniReduce builds on an earlier KAUST project called SwitchML, which runs aggregation code on the network switches that govern inter-node communication, increasing the efficiency of data transfers. OmniReduce streamlines this process further by dropping zero-valued blocks from the results, without interrupting the synchronization of the parallel computations between nodes. As the team notes, exploiting sparsity in this manner is challenging, because all nodes have to process data blocks at the same position in a given time slot, so coordination is of paramount importance.


“Coordination is key to sending only the non-zero data,” explained the team. “The aggregator globally determines the positions of non-zero values among [nodes] in a look-ahead fashion based on the next position metadata efficiently available at the [nodes] (which communicate it to the aggregator). This component differentiates OmniReduce from any related work.”
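The toy sketch below conveys the general principle described above, under the assumption of fixed-size blocks: each worker sends only its non-zero blocks tagged with their positions, and an aggregator sums blocks by position, treating anything missing as zero. It is a simplification for illustration, not OmniReduce's actual streaming aggregation protocol or its look-ahead coordination.

```python
# Toy illustration of sparsity-exploiting aggregation: workers transmit only
# non-zero blocks (with their indices); the aggregator sums them by position.
# This is a simplified sketch, not the OmniReduce protocol itself.
import torch

BLOCK_SIZE = 256  # arbitrary illustrative block size

def to_sparse_blocks(grad: torch.Tensor) -> dict:
    """Return {block_index: block} for blocks that contain any non-zero value."""
    flat = grad.flatten()
    pad = (-flat.numel()) % BLOCK_SIZE
    if pad:
        flat = torch.nn.functional.pad(flat, (0, pad))
    blocks = flat.view(-1, BLOCK_SIZE)
    return {i: b.clone() for i, b in enumerate(blocks) if bool(b.any())}

def aggregate(worker_payloads: list, num_blocks: int) -> torch.Tensor:
    """Sum the blocks reported by all workers; blocks nobody sent stay zero."""
    result = torch.zeros(num_blocks, BLOCK_SIZE)
    for payload in worker_payloads:
        for idx, block in payload.items():
            result[idx] += block
    return result.flatten()
```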

In tests against existing collective libraries such as NCCL and Gloo, running six popular deep neural network models including BERT and ResNet152, the researchers found that OmniReduce performed well, speeding up training by up to 8.2 times. They also found that OmniReduce was effective for large-DNN distributed training jobs on multi-GPU servers.

In addition, the team ran tests pitting OmniReduce against other state-of-the-art collective communication solutions running on TCP/IP and RDMA networks, including AllReduce, SparCML, and Parallax, and found that OmniReduce outperformed these competitors by 3.5 to 16 times.

“The performance benefit of OmniReduce is two-fold,” said the team. “First, OmniReduce is much more scalable, and both speedup factors grow with the number of [nodes] because OmniReduce’s time does not depend on the number of [nodes]. This speedup is fundamental and exists even with a dense input. Second, in contrast to ring AllReduce, OmniReduce only sends non-zero elements, which reduces the time proportionally.”

The team is now working to adapt OmniReduce to run on programmable switches that use in-network computation to further boost performance. So far, OmniReduce has been adopted for training large-scale workloads at Meituan, a major shopping and on-demand delivery platform based in China.


Images: Kelly Lacy via Pexels; KAUST



Source: InApps.net

