Deep neural network (DNN) applications are growing in importance in areas such as internet search, retail, and medical imaging. Intel recognizes the importance of these workloads and is developing software solutions to accelerate them on Intel architecture; these optimizations will become available in future versions of Intel® Math Kernel Library (Intel® MKL) and Intel® Data Analytics Acceleration Library (Intel® DAAL).
While we work on this new functionality, we have published a series of articles demonstrating DNN optimizations with the Caffe* framework and the AlexNet topology:
- Single Node Caffe Scoring and Training on Intel® Xeon E5-Series Processors
- Caffe* Training on Multi-node Distributed-memory Systems Based on Intel® Xeon® Processor E5 Family
- Caffe* Scoring Optimization for Intel® Xeon® Processor E5 Series
Technical details on the optimizations behind these technical previews are available in blog posts by Intel Labs' Pradeep Dubey:
- Myth Busted: General Purpose CPUs Can’t Tackle Deep Neural Network Training
- Myth Busted: General Purpose CPUs Can’t Tackle Deep Neural Network Training – Part 2
You can also take a sneak peek at the programming model and functionality of the Intel MKL DNN extensions using the Deep Neural Network Technical Preview for Intel® Math Kernel Library (Intel® MKL). The feedback we receive on this preview is essential to shaping future Intel MKL products.