
NVIDIA Beats AMD to Market On HBM2 – Announces Tesla P100

Artificial intelligence for self-driving cars. Predicting our climate’s future. A new drug to treat cancer. Some of the world’s most important challenges need to be solved today, but require tremendous amounts of computing to become reality. Today’s data centers rely on many interconnected commodity compute nodes, limiting the performance needed to drive important High Performance Computing (HPC) and hyperscale workloads.

NVIDIA® Tesla® P100 GPU accelerators are the most advanced ever built for the data center. They tap into the new NVIDIA Pascal™ GPU architecture to deliver the world’s fastest compute node with higher performance than hundreds of slower commodity nodes. Higher performance with fewer, lightning-fast nodes enables data centers to dramatically increase throughput while also saving money.

With over 400 HPC applications accelerated—including 9 out of the top 10—as well as all deep learning frameworks, every HPC customer can now deploy accelerators in their data centers.

– See more at: http://www.nvidia.com/object/tesla-p100.html

NVIDIA has announced availability of its latest data center accelerator, the Tesla P100, the world’s first HBM2-powered add-in card. This means NVIDIA effectively beat AMD to market with HBM2, a technology AMD pioneered (in its original HBM form) with the Fury line of graphics cards.

NVIDIA naturally touts this as the world’s most advanced data center accelerator, aimed at workloads such as “artificial intelligence for self-driving cars, predicting our climate’s future, and a new drug to treat cancer.” NVIDIA’s suitably green graphs show an almost 50x increase in computing power from eight Tesla P100 accelerators compared to a dual-CPU server based on Intel’s Xeon E5-2698 v3 (which isn’t really all that surprising). NVIDIA further leans on the PR talk with examples of how a single GPU-accelerated node powered by four Tesla P100s, interconnected via PCIe, can replace up to 32 commodity CPU nodes for a variety of applications, saving up to 70% in overall data center costs.
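To see how a claim like that might pencil out, here is a minimal back-of-envelope sketch. All the dollar figures below are made-up placeholders for illustration only; NVIDIA has not published per-node pricing here, and the real savings would depend on hardware, power, and facility costs.

```python
# Hypothetical back-of-envelope comparison (all costs are assumed,
# NOT from NVIDIA): 32 commodity CPU nodes vs. one 4x Tesla P100 node.
CPU_NODE_COST = 10_000   # assumed cost per commodity CPU node
GPU_NODE_COST = 96_000   # assumed cost of one quad-P100 node

cpu_cluster_total = 32 * CPU_NODE_COST           # cost of the CPU cluster
savings = 1 - GPU_NODE_COST / cpu_cluster_total  # fractional savings

print(f"CPU cluster: ${cpu_cluster_total:,}")
print(f"GPU node:    ${GPU_NODE_COST:,}")
print(f"Savings:     {savings:.0%}")
```

With these invented numbers the single GPU node comes out 70% cheaper, which is the shape of the comparison NVIDIA is making; the point is the arithmetic, not the specific figures.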

Source: Nvidia via Techpowerup
