Intel and Facebook accelerate PyTorch performance with 3rd Gen Xeon Processors and Intel Deep Learning Boost’s new BFloat16 capability
Intel and Facebook have previously demonstrated the benefits of BFloat16 (BF16) across multiple deep learning training workloads, achieving the same accuracy as 32-bit single-precision floating point (FP32) with no changes to the training hyperparameters. Today, Intel is announcing the 3rd Gen Intel® Xeon® Scalable processors (formerly codenamed Cooper Lake) with Intel® Deep Learning Boost's (Intel DL Boost) new BF16 technology to accelerate training and inference performance. This comes in addition to the support for the Intel DL Boost INT8 technology introduced last year.
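The reason BF16 can match FP32 accuracy without retuning hyperparameters is its layout: it keeps FP32's 8-bit exponent (and thus the same dynamic range) but shortens the significand to 7 explicit bits. A BF16 value is essentially the top 16 bits of an FP32 value. The sketch below illustrates this with a simplified truncating conversion (real hardware typically rounds to nearest even); it is an illustrative example, not Intel's or PyTorch's implementation:

```python
import struct

def to_bfloat16(x: float) -> float:
    """Simulate BF16 by zeroing the low 16 bits of the FP32 encoding.

    BF16 shares FP32's sign bit and 8-bit exponent, so range is
    preserved; only significand precision is reduced (7 bits vs. 23).
    This uses truncation for simplicity; hardware usually rounds.
    """
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

# Precision drops, but the magnitude is unchanged:
print(to_bfloat16(3.14159))  # 3.140625
# Exactly representable values survive the conversion:
print(to_bfloat16(1.0))      # 1.0
```

Because the exponent field is untouched, very large and very small gradients that would overflow or underflow in FP16 remain representable in BF16, which is why training typically works without loss-scaling tricks.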
Source: www.intel.com/content/www/us/en/artificial-intelligence/posts/intel-facebook-boost-bfloat16.html
June 19, 2020