When we think of training deep learning models, NVIDIA gets all the attention. However, Intel is not sitting quietly, staring at the massive AI opportunity. It is moving fast to deliver an end-to-end hardware and software platform optimized to run AI and ML models.
First, Intel has done extensive work to make the Xeon family of processors highly optimized for AI. Intel claims that its Xeon Scalable processors can compete with GPUs in accelerating training on large datasets.
Intel is telling its customers that they don't need expensive GPUs until their workloads cross a certain threshold. Most deep learning training can be done effectively on CPUs that cost a fraction of their GPU counterparts.
Beyond the marketing messages and claims, Intel went on to prove that its deep learning stack performs better than an NVIDIA GPU-based stack. Recently, Intel published a benchmark to show its leadership in deep learning: Intel Xeon Scalable processors trained ResNet-50 at 7,878 images per second, narrowly outperforming the 7,844 images per second achieved on an NVIDIA Tesla V100.
Intel's performance optimization doesn't come just from its CPUs. It is delivered by a purpose-built software stack that is highly optimized at various levels. From the operating system to the TensorFlow framework, Intel has tweaked multiple layers of software to deliver unmatched performance.
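To give a feel for what this tuning looks like in practice, Intel's TensorFlow-on-Xeon performance guides recommend pinning OpenMP worker threads to physical cores through environment variables. Here is a minimal sketch; the thread count of 28 is an assumption for a 28-core Xeon socket, not a universal setting, so tune it for your own machine:

```python
# Threading knobs recommended in Intel's TensorFlow CPU tuning guides.
# The values below are illustrative assumptions, not measured settings.
import os

os.environ["OMP_NUM_THREADS"] = "28"  # assumed one thread per physical core
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"  # pin threads
os.environ["KMP_BLOCKTIME"] = "1"     # ms a thread spins after finishing work

# These must be set before TensorFlow is imported so that the
# MKL-DNN-backed runtime picks them up at initialization.
```

The same guides also suggest setting TensorFlow's intra-op and inter-op parallelism to match the physical core and socket counts, respectively.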
To ease the process of running this end-to-end stack, Intel has turned to one of its open source projects, Clear Linux OS. The Clear Linux project started as a purpose-built, container-optimized, and lightweight operating system. It was founded on the premise that the OS running a container doesn't need to perform all the functions of a traditional OS. Container Linux, the OS developed by CoreOS (now a part of Red Hat), followed the same philosophy.
Within a short span, Clear Linux gained popularity among open source developers. Intel kept improving the OS, making it suitable for running modern workloads such as machine learning training jobs, AI inferencing, analytics, and edge computing.
Clear Linux has become the foundation of Intel's optimized software stack. Recently, at the Open Source Technology Summit, Intel announced two reference architectures: a deep learning stack and an advanced analytics stack.
The Deep Learning Reference Stack is an integrated, highly performant open source stack optimized for Intel Xeon Scalable processors. The stack includes support for Intel Advanced Vector Extensions 512 Vector Neural Network Instructions (AVX-512 VNNI) and is designed to accelerate AI use cases such as image recognition, object detection, speech recognition, and language translation. Customers can use it to train complex neural networks meant for advanced use cases.
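Since the stack's acceleration depends on AVX-512 VNNI, it is worth verifying that the host CPU actually exposes the instruction set before expecting those gains. A quick sketch that reads the CPU flags the Linux kernel reports (the helper function name is my own):

```python
# Check whether this CPU advertises the AVX-512 VNNI instructions the
# Deep Learning Reference Stack uses for acceleration. Reads the flags
# line from the Linux kernel; /proc/cpuinfo is absent on non-Linux hosts.

def has_avx512_vnni(cpuinfo_path="/proc/cpuinfo"):
    try:
        with open(cpuinfo_path) as f:
            return "avx512_vnni" in f.read()
    except FileNotFoundError:
        return False

print("AVX-512 VNNI available:", has_avx512_vnni())
```

On processors without the flag, the stack still runs, but without the VNNI fast path.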
The Data Analytics Reference Stack, built on Intel Xeon Scalable platforms using Apache Hadoop and Apache Spark, was developed to help enterprises analyze, classify, recognize, and process large amounts of data.
These stacks are built on a Clear Linux foundation that runs the containerized workloads. In distributed environments, Kubernetes is the preferred orchestrator.
Apart from deep learning and analytics, Intel's Clear Linux can be used to run applications at the edge powered by AWS Greengrass.
Clear Linux OS can even be run on developer desktops.
Before you switch to expensive hardware and software stacks to run deep learning jobs, give Intel's Clear Linux a chance.