Nvidia GPUs Nudge HPE Supercomputer into the Exascale

Enterprise IT systems provider Hewlett Packard Enterprise will start delivering and installing the computing components that will make up Polaris, a massive supercomputer that will be housed at the Argonne National Laboratory in Illinois and serve as a testbed for artificial intelligence (AI) and other projects for the lab’s upcoming Aurora exascale system.

Once Polaris is assembled and brought online in early 2022, it will deliver more than four times the performance of the supercomputers currently running at Argonne. At a peak of 44 petaFLOPS (44 quadrillion floating-point operations per second), it would rank as the ninth-fastest system on the twice-yearly Top500 list of the world’s fastest supercomputers, based on the most recent list released in June.

The supercomputer, which will include 2,240 A100 Tensor Core GPU accelerators from Nvidia, also will drive almost 1.4 exaFLOPS of theoretical AI performance, based on mixed-precision compute capabilities.
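As a back-of-the-envelope check, the quoted figures line up with Nvidia’s published per-GPU A100 peaks (19.5 teraFLOPS of FP64 via Tensor Cores, and 624 teraFLOPS of sparse mixed-precision Tensor Core throughput). The arithmetic below is an illustration of how the aggregate numbers are likely derived, not an official breakdown:

```python
# Sanity check of Polaris's quoted peaks using Nvidia's published
# per-A100 figures (assumption: these are the per-GPU numbers behind
# the 44 petaFLOPS and 1.4 exaFLOPS claims).
NUM_GPUS = 2240

A100_FP64_TENSOR_TFLOPS = 19.5   # FP64 via Tensor Cores, per GPU
A100_MIXED_SPARSE_TFLOPS = 624.0  # mixed-precision (FP16, sparse), per GPU

fp64_pflops = NUM_GPUS * A100_FP64_TENSOR_TFLOPS / 1_000       # tera -> peta
ai_exaflops = NUM_GPUS * A100_MIXED_SPARSE_TFLOPS / 1_000_000  # tera -> exa

print(f"FP64 peak: ~{fp64_pflops:.1f} petaFLOPS")  # ~43.7, i.e. the quoted 44
print(f"AI peak:   ~{ai_exaflops:.2f} exaFLOPS")   # ~1.40, i.e. "almost 1.4"
```

Both results land within rounding distance of the announced numbers, which suggests the peak figures are straight multiples of the GPU count.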

Polaris “is going to allow [Argonne’s] developers, their application holders, their engineers to start building out capabilities for accelerated computing at a grand scale,” Dion Harris, technical marketing leader at Nvidia, said during a press briefing about the supercomputer. “This is a very performant system, both in terms of AI as well as classic FP64 for first principles-based simulation. Therefore, we expect this to accelerate their core applications, as well as to set them up to have an incredible AI system, even when Aurora is brought online.”

Exascale on the Horizon

Aurora is one of three exascale supercomputers – along with El Capitan at the Lawrence Livermore National Lab and Frontier at Oak Ridge National Lab – being built in the United States and expected to go online in the next year or two. Exascale computing promises to enable researchers to run increasingly complex high-performance computing (HPC) workloads that current systems can’t handle and to help enterprises that are being overwhelmed with data and emerging workloads like AI and data analytics.


Other nations and regions, including China, Japan and the European Union, are also building exascale supercomputers as they and the United States race to establish leadership in exascale and supercomputing. Those that succeed will have an edge in areas ranging from scientific research and the military to health care and the economy.

In mid-2019, the U.S. Department of Energy (DOE) awarded longtime supercomputer vendor Cray the $600 million contract to build El Capitan. The company already was named to build Aurora and Frontier. HPE became the system vendor when it bought Cray in September 2019 for $1.3 billion, a move that greatly expanded its presence in HPC.

Polaris Takes Shape

Now the company is building Polaris. It will be based on 280 Apollo Gen10 Plus systems, which were created for HPC and AI environments and built with the exascale era in mind. The systems will use 560 2nd and 3rd Gen Epyc server processors from AMD for improved modeling, simulation and data-intensive workflows, and the GPUs will help drive the supercomputer’s capabilities for running AI workloads.

Aurora, like Polaris, will be an accelerated system, though using Intel’s Xeon Scalable processors and the chipmaker’s upcoming Xe-HPC “Ponte Vecchio” GPUs.

Polaris will run HPE’s Cray OS and use the company’s Slingshot Ethernet fabric, a high-speed interconnect designed for HPC and AI environments that HPE inherited when it bought Cray, along with HPE Performance Cluster Manager to monitor and manage the supercomputer. For storage, the system will use Eagle and Grand, two 100-petabyte Lustre file systems developed last year by the Argonne Leadership Computing Facility (ALCF), a DOE science site, and backed by HPE’s Cray ClusterStor E1000 platform. Eagle enables data sharing within the scientific community, according to Argonne.

Polaris has been talked about for about a year, but the Aug. 25 announcement brought with it many more details.

“Polaris is well equipped to help move the ALCF into the exascale era of computational science by accelerating the application of AI capabilities to the growing data and simulation demands of our users,” ALCF Director Michael Papka said in a statement. “Beyond getting us ready for Aurora, Polaris will further provide a platform to experiment with the integration of supercomputers and large-scale experiment facilities … making HPC available to more scientific communities. Polaris will also provide a broader opportunity to help prototype and test the integration of HPC with real-time experiments and sensor networks.”


Accelerated Computing Is Key

Nvidia’s Harris said supercomputing has long pushed the boundaries of what technology can do to help solve a broad array of challenges, including finding cures for cancer, exploring fusion energy and addressing climate change. In recent years, however, researchers have been hindered by the slowing of Moore’s Law at a time when problem sizes and data volumes keep growing. The arrival of AI, and its widespread use in internet applications, has drawn scientists’ interest in how the technology can be applied to their own research.

Leveraging GPU accelerators like those from Nvidia will help drive the performance of such workloads running on Polaris, he said.

“When we talk about how the technology is going to be used, it’s really exciting to see that scientists can get started now,” Harris said. “Once we deploy the system early next year, they’ll be able to start working on these applications [and] to port them to an accelerated model. They can start leveraging AI to really build out the capabilities of how they can look at this converged HPC-plus-AI model. Then again, they can start testing some of those theories by getting a head start on that leveraging Polaris.”

Once online, Polaris initially will be used by researchers participating in programs such as the DOE’s Exascale Computing Project and ALCF’s Aurora Early Science Program. The latter was created to help scientists, engineers and other users prepare key applications to run on a system of Aurora’s architecture and scale, to get libraries and infrastructure in place for other production applications, and to tackle projects that current supercomputers can’t.

Those projects range from advancing cancer treatments and addressing the United States’ energy security while reducing climate impact to analyzing particle collisions from the ATLAS experiment, which uses the Large Hadron Collider particle accelerator in Switzerland.

Within a few months of Polaris going online, it likely will be opened up to the wider research community, Harris said.

Feature Photo by Javier Esteban on Unsplash.

Source: InApps.net
