WCCFTech

NVIDIA has today announced the latest additions to its range of Pascal-based GPU accelerators. The new Tesla P40 and P4 GPU accelerators are designed specifically for neural network workloads, boosting AI inferencing speeds to up to 45x those of CPU-based systems and offering a 4x increase over the last generation of GPUs.

NVIDIA’s Tesla GPU accelerators are designed to take on demanding server and compute workloads, speeding up data processing, making virtual desktops more responsive and allowing neural network systems to process data significantly more quickly.

The new platform NVIDIA is releasing alongside the P40 and P4 GPU accelerators is designed to improve the deep learning experience end to end. NVIDIA’s deep learning platform pairs training systems and deep learning frameworks with an inferencing stack built around TensorRT, the DeepStream SDK and the P40 and P4.
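To give a sense of where TensorRT sits in that inferencing stack, here is a minimal sketch of building an INT8 engine from a trained network. It is written against the modern TensorRT C++ API, not the Caffe-based interface the 2016-era release (then branded the GPU Inference Engine) shipped with; the model file name is a placeholder, and a production build would also attach an INT8 calibrator so TensorRT can choose quantization scales.

```cpp
// Hedged sketch: build an INT8 TensorRT engine from an ONNX model.
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <fstream>
#include <iostream>
#include <memory>

// TensorRT requires a logger implementation.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
} gLogger;

int main() {
    using namespace nvinfer1;

    auto builder = std::unique_ptr<IBuilder>(createInferBuilder(gLogger));
    auto network = std::unique_ptr<INetworkDefinition>(builder->createNetworkV2(
        1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH)));
    auto parser = std::unique_ptr<nvonnxparser::IParser>(
        nvonnxparser::createParser(*network, gLogger));

    // "model.onnx" is a placeholder for whatever trained network is deployed.
    if (!parser->parseFromFile("model.onnx",
                               static_cast<int>(ILogger::Severity::kWARNING))) {
        std::cerr << "Failed to parse model" << std::endl;
        return 1;
    }

    auto config = std::unique_ptr<IBuilderConfig>(builder->createBuilderConfig());
    // Request 8-bit inference; real deployments also supply an IInt8Calibrator.
    config->setFlag(BuilderFlag::kINT8);

    auto serialized = std::unique_ptr<IHostMemory>(
        builder->buildSerializedNetwork(*network, *config));

    std::ofstream out("engine.plan", std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()), serialized->size());
    return 0;
}
```

The DeepStream SDK then covers the video side of the pipeline, decoding streams and feeding frames into engines like this one.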

Designed specifically for inferencing, the P40 and P4 run trained neural networks to recognize images, text and speech supplied to the system by users. Built on the Pascal architecture, the new GPUs support 8-bit integer (INT8) operations, significantly improving inference throughput compared with last year’s cards.
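That 8-bit path is exposed to programmers directly: Pascal parts with compute capability 6.1 (the GP102 and GP104 used in the P40 and P4) add the __dp4a instruction, a four-way dot product of packed 8-bit integers with a 32-bit accumulate. The toy kernel below is an illustrative sketch of the idea, not NVIDIA’s code; the data and launch configuration are arbitrary, and it needs to be compiled with -arch=sm_61 or newer.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each int holds four packed signed 8-bit values; __dp4a multiplies the
// corresponding bytes of a and b and adds all four products to the accumulator.
__global__ void int8_dot(const int* a, const int* b, int* out, int n4) {
    int acc = 0;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n4;
         i += blockDim.x * gridDim.x) {
        acc = __dp4a(a[i], b[i], acc);
    }
    atomicAdd(out, acc);
}

int main() {
    const int n4 = 1 << 20;  // 4M int8 elements, packed four per 32-bit word
    int *a, *b, *out;
    cudaMallocManaged(&a, n4 * sizeof(int));
    cudaMallocManaged(&b, n4 * sizeof(int));
    cudaMallocManaged(&out, sizeof(int));
    for (int i = 0; i < n4; ++i) { a[i] = 0x01010101; b[i] = 0x02020202; }
    *out = 0;

    int8_dot<<<256, 256>>>(a, b, out, n4);
    cudaDeviceSynchronize();
    printf("dot = %d\n", *out);  // expect 4 * (1*2) * n4 = 8 * n4

    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
```

Because each __dp4a retires four multiply-accumulates in the slot a single FP32 FMA would occupy, peak INT8 throughput works out to roughly four times the FP32 rate.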

P4 Specs

– Full GP104 GPU with 2560 CUDA cores
– Runs at 810 MHz base and 1063 MHz boost
– 50-75W package
– Higher power efficiency

P40 Specs

– Full GP102 GPU with 3840 CUDA cores
– 24GB of GDDR5 memory
– Runs at 1303 MHz base and 1531 MHz boost
– Memory clocked at 7.2 GHz
– 384-bit interface
– 346 GB/s bandwidth
– 12 TFLOPS of FP32
– 47 TOPS of INT8
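As a rough sanity check, the headline figures in both lists fall straight out of the clocks and core counts above: peak FP32 is cores × 2 (one fused multiply-add per clock) × boost clock, INT8 adds another 4x from dp4a packing, and bandwidth is the effective memory data rate times the bus width. The snippet below just prints those back-of-the-envelope numbers; the P4’s FP32/INT8 figures are derived the same way rather than quoted from NVIDIA.

```cpp
#include <cstdio>

int main() {
    // P40 (GP102, 3840 cores, 1531 MHz boost, 7.2 Gbps on a 384-bit bus)
    double p40_fp32 = 3840 * 2 * 1.531e9 / 1e12;   // ~11.8 TFLOPS FP32
    double p40_int8 = p40_fp32 * 4;                // ~47 TOPS INT8 via dp4a
    double p40_bw   = 7.2e9 * 384 / 8 / 1e9;       // ~345.6 GB/s

    // P4 (GP104, 2560 cores, 1063 MHz boost)
    double p4_fp32  = 2560 * 2 * 1.063e9 / 1e12;   // ~5.4 TFLOPS FP32
    double p4_int8  = p4_fp32 * 4;                 // ~21.8 TOPS INT8

    printf("P40: %.1f TFLOPS FP32, %.1f TOPS INT8, %.1f GB/s\n",
           p40_fp32, p40_int8, p40_bw);
    printf("P4 : %.1f TFLOPS FP32, %.1f TOPS INT8\n", p4_fp32, p4_int8);
    return 0;
}
```

The small gap between the computed 11.8 TFLOPS / 345.6 GB/s and the quoted 12 TFLOPS / 346 GB/s is just rounding in the marketing figures.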