In a clear signal of its ambitions for the estimated $91.18 billion AI chip market, Intel this morning announced that it has acquired Habana Labs, an Israel-based developer of programmable AI and machine learning accelerators for cloud data centers. The deal is worth approximately $2 billion, and Intel says it’ll strengthen its AI strategy as Habana begins to sample its proprietary silicon to customers.
Habana — which raised $75 million in venture capital last November — will remain an independent business unit led by its current management team, reporting to Intel’s data platforms group. Chairman Avigdor Willenz will serve as senior adviser to the business unit as well as to Intel.
“This acquisition advances our AI strategy, which is to provide customers with solutions to fit every performance need — from the intelligent edge to the data center,” said Navin Shenoy, executive vice president and general manager of Intel’s data platforms group. “More specifically, Habana turbo-charges our AI offerings for the data center with a high-performance training processor family and a standards-based programming environment to address evolving AI [compute requirements].”
Habana offers two silicon products targeting AI and machine learning workloads: the Gaudi AI Training Processor and the Goya AI Inference Processor. The former, which is optimized for “hyperscale” environments, is anticipated to power data centers delivering up to four times the throughput of systems built with an equivalent number of graphics chips, at half the energy per chip (140 watts). As for the Goya processor, which was unveiled in June and is now commercially available, it offers up to three times the AI inferencing performance of Nvidia chips where throughput and latency are concerned.
Gaudi is available as a standard PCI-Express card as well as a mezzanine card compliant with the Open Compute Project accelerator module specs. It features one of the industry’s first on-die implementations of Remote Direct Memory Access over Converged Ethernet (RoCE) on an AI chip, which provides ten 100Gb or twenty 50Gb communication links, enabling it to scale to “thousands” of discrete accelerator cards. (A complete system with eight Gaudis, called the HLS-1, will ship in the coming months.)
Goya will complement Intel’s in-house Nervana NNP-I, code-named Spring Hill, which is based on a 10-nanometer Ice Lake processor that will allow it to cope with high workloads using minimal amounts of energy. As for Gaudi, it’ll slot alongside Intel’s Nervana Neural Net L-1000 (code-named Spring Crest), which is optimized for image recognition and whose architecture is distinct from other chips in that it lacks a standard cache hierarchy and its on-chip memory is managed directly by software. (Intel has previously claimed the NNP-T’s 24 compute clusters, 32GB of HBM2 stacks, and local SRAM deliver up to 10 times the AI training performance of competing graphics cards.)
On the software side of the equation, Habana offers a development and execution environment — SynapseAI — with libraries and a JIT compiler designed to help customers deploy AI solutions. Importantly, it supports the standard AI and machine learning frameworks (e.g., Google’s TensorFlow and Facebook’s PyTorch), as well as the Open Neural Network Exchange (ONNX) format championed by Microsoft, IBM, Huawei, Qualcomm, AMD, ARM, and others.
“We have been fortunate to get to know and collaborate with Intel given its investment in Habana, and we’re thrilled to be officially joining the team,” said Habana CEO David Dahan. “Intel has created a world-class AI team and capability. We are excited to partner with Intel to accelerate and scale our business. Together, we will deliver our customers more AI innovation, faster.”
The future of Intel is AI. Its books imply as much — the Santa Clara company’s AI chip segments notched $3.5 billion in revenue this year, up from $1 billion a year in 2017, and it expects the market opportunity to grow 30% annually from $2.5 billion in 2017 to $10 billion by 2022.
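As a quick sanity check on those projections, compounding the article’s 2017 market figure at 30% per year does land close to the 2022 target (a back-of-the-envelope illustration; the dollar figures are the article’s, the calculation is ours):

```python
# Compound the $2.5B 2017 market opportunity at 30% per year through 2022.
base_2017 = 2.5          # market opportunity in billions of dollars (2017)
annual_growth = 1.30     # 30% year-over-year growth
years = 2022 - 2017      # five years of compounding

projected_2022 = base_2017 * annual_growth ** years
print(f"${projected_2022:.2f}B")  # about $9.28B, i.e. roughly $10 billion
```

So the "30% annually" and "$10 billion by 2022" figures are mutually consistent, with the projection rounding up from about $9.3 billion.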
Intel’s purchase of Habana comes after its September 2016 acquisition of San Mateo-based Movidius, which designs specialized low-power processor chips for computer vision. It bought field-programmable gate array (FPGA) manufacturer Altera in 2015 and a year later acquired Nervana, filling out its hardware platform offerings and setting the stage for an entirely new generation of AI accelerator chipsets. And in August 2018, Intel snatched up Vertex.ai, a startup developing a platform-agnostic AI model suite.