NVIDIA Boosts World’s Leading Deep Learning Computing Platform, Bringing 10x Performance Gain in Six Months

Gain Driven by New Tesla V100 32GB GPU with 2x the Memory, Revolutionary NVSwitch Fabric, Comprehensive Software Stack; DGX-2 Is First 2 Petaflop Deep Learning System

GPU Technology Conference — NVIDIA has unveiled a series of important advances to its world-leading deep learning computing platform, which delivers a 10x performance boost on deep learning workloads compared with the previous generation six months ago.

Key advancements to the NVIDIA platform — which has been adopted by every major cloud-services provider and server maker — include a 2x memory boost to NVIDIA® Tesla® V100, the most powerful datacenter GPU, and a revolutionary new GPU interconnect fabric called NVIDIA NVSwitch™, which enables up to 16 Tesla V100 GPUs to simultaneously communicate at a record speed of 2.4 terabytes per second. NVIDIA also introduced an updated, fully optimized software stack.
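To give a sense of what an all-to-all GPU fabric like NVSwitch looks like from the programmer's side, below is a minimal CUDA sketch (not taken from NVIDIA's announcement) that enumerates the GPUs in a node and enables direct peer-to-peer access between every pair that supports it; on a fully connected fabric such as an NVSwitch-based system, every pair would be expected to qualify.

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    printf("Found %d GPUs\n", deviceCount);

    // Try to enable direct peer-to-peer access between every pair of GPUs.
    // On a fully connected fabric, every pair should report support.
    for (int src = 0; src < deviceCount; ++src) {
        cudaSetDevice(src);
        for (int dst = 0; dst < deviceCount; ++dst) {
            if (src == dst) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, src, dst);
            if (canAccess) {
                // The flags argument is reserved and must be 0.
                cudaDeviceEnablePeerAccess(dst, 0);
                printf("GPU %d -> GPU %d: peer access enabled\n", src, dst);
            } else {
                printf("GPU %d -> GPU %d: no direct peer access\n", src, dst);
            }
        }
    }
    return 0;
}

Once peer access is enabled, memory copies and kernel loads between GPUs can travel over the interconnect directly rather than staging through host memory, which is the behavior the NVSwitch fabric is designed to accelerate.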

Additionally, NVIDIA launched a major breakthrough in deep learning computing with NVIDIA DGX-2™, the first single server capable of delivering two petaflops of computational power. DGX-2 has the deep learning processing power of 300 servers occupying 15 racks of datacenter space, while being 60x smaller and 18x more power efficient.

“The extraordinary advances of deep learning only hint at what is still to come,” said Jensen Huang, NVIDIA founder and CEO, as he unveiled the news at GTC 2018. “Many of these advances stand on NVIDIA’s deep learning platform, which has quickly become the world’s standard. We are dramatically enhancing our platform’s performance at a pace far exceeding Moore’s law, enabling breakthroughs that will help revolutionize healthcare, transportation, science exploration and countless other areas.”

Tesla V100 Gets Double the Memory
The Tesla V100 GPU, widely adopted by the world’s leading researchers, has received a 2x memory boost to handle the most memory-intensive deep learning and high performance computing workloads.

Now equipped with 32GB of memory, Tesla V100 GPUs will help data scientists train deeper and larger deep learning models that are more accurate than ever. They can also improve the performance of memory-constrained HPC applications by up to 50 percent compared with the previous 16GB version.
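For readers who want to check which Tesla V100 variant a system actually exposes, the short CUDA sketch below (again, an illustration rather than part of the announcement) reports each GPU's name and total memory, which distinguishes the 32GB model from the earlier 16GB one.

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);

    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);

        cudaSetDevice(dev);
        size_t freeBytes = 0, totalBytes = 0;
        cudaMemGetInfo(&freeBytes, &totalBytes);

        // totalGlobalMem / cudaMemGetInfo report full device memory:
        // roughly 32 GiB on the new Tesla V100 versus ~16 GiB on the
        // previous variant.
        printf("GPU %d: %s, %.1f GiB total, %.1f GiB currently free\n",
               dev, prop.name,
               totalBytes / (1024.0 * 1024.0 * 1024.0),
               freeBytes  / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}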

The Tesla V100 32GB GPU is immediately available across the complete NVIDIA DGX system portfolio. Additionally, major computer manufacturers Cray, Hewlett Packard Enterprise, IBM, Lenovo, Supermicro and Tyan announced they will begin rolling out their new Tesla V100 32GB systems in the second quarter. Oracle Cloud Infrastructure also announced plans to offer Tesla V100 32GB in the cloud in the second half of the year.

 

 
