Intel has unveiled new technologies and architectures poised to accelerate the AI ecosystem – from the data centre, cloud and network to the edge and PC.

At the recent Computex event, Intel announced the following:

* The launch of Intel Xeon 6 processors with Efficient-cores (E-cores), delivering performance and power efficiency for high-density, scale-out workloads in the data centre. These processors enable 3:1 rack consolidation, rack-level performance gains of up to 4.2x and performance-per-watt gains of up to 2.6x.

* Pricing for Intel Gaudi 2 and Intel Gaudi 3 AI accelerator kits, delivering high performance with up to one-third lower cost compared to competitive platforms. The combination of Xeon processors with Gaudi AI accelerators in a system offers a powerful solution for making AI faster, cheaper and more accessible.

* The Lunar Lake client processor architecture, which will continue to grow the AI PC category. The next generation of AI PCs – with breakthrough x86 power efficiency and no-compromise application compatibility – will deliver up to 40% lower system-on-chip (SoC) power than the previous generation.

“AI is driving one of the most consequential eras of innovation the industry has ever seen,” says Intel CEO Pat Gelsinger. “The magic of silicon is once again enabling exponential advancements in computing that will push the boundaries of human potential and power the global economy for years to come.”

In just six months, Intel has progressed from launching 5th Gen Intel Xeon processors to introducing the inaugural member of the Xeon 6 family; from previewing Gaudi AI accelerators to offering enterprise customers a cost-effective, high-performance generative AI (GenAI) training and inference system; and from ushering in the AI PC era with Intel Core Ultra processors in more than 8-million devices to unveiling the forthcoming client architecture slated for release later this year.


Modernising the Data Centre for AI

The first of the Xeon 6 processors to debut is the Intel Xeon 6 E-core (code-named Sierra Forest), which is available beginning today. Xeon 6 P-cores (code-named Granite Rapids) are expected to launch next quarter.

With high core density and exceptional performance per watt, Intel Xeon 6 E-core delivers efficient compute with significantly lower energy costs. The improved performance with increased power efficiency is perfect for the most demanding high-density, scale-out workloads, including cloud-native applications and content delivery networks, network microservices and consumer digital services.

Additionally, Xeon 6 E-core has tremendous density advantages, enabling 3:1 rack-level consolidation and giving customers a rack-level performance gain of up to 4.2x and a performance-per-watt gain of up to 2.6x compared with 2nd Gen Intel Xeon processors on media transcode workloads. Using less power and rack space, Xeon 6 processors free up compute capacity and infrastructure for innovative new AI projects.


Intel Gaudi AI Accelerators

Intel Xeon processors are an ideal CPU head node for AI systems and operate alongside Intel Gaudi AI accelerators, which are purpose-built for AI workloads. Together, the two offer a powerful solution that integrates into existing infrastructure.

Intel Gaudi 3 accelerators will deliver significant performance improvements for training and inference on leading GenAI models, helping enterprises unlock the value in their proprietary data. In an 8,192-accelerator cluster, Intel Gaudi 3 is projected to offer up to 40% faster time-to-train than an equivalently sized Nvidia H100 GPU cluster, and up to 15% faster training throughput than Nvidia H100 for a 64-accelerator cluster on the Llama 2 70B model.

In addition, Intel Gaudi 3 is projected to deliver, on average, up to 2x faster inferencing than Nvidia H100 when running popular LLMs such as Llama 2 70B and Mistral 7B.
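
To illustrate how Gaudi accelerators slot into existing infrastructure, the sketch below moves a standard PyTorch model onto a Gaudi device using the habana_frameworks PyTorch bridge from Intel's Gaudi software suite. This is a minimal sketch, not Intel's reference setup: the model, batch size and tensor shapes are placeholders, and exact module names can vary between software releases.

```python
# Minimal sketch: running inference on an Intel Gaudi accelerator via the
# habana_frameworks PyTorch bridge. Assumes the Gaudi software stack is
# installed; the model and input shapes below are placeholders.
import torch
import habana_frameworks.torch.core as htcore  # Gaudi (HPU) PyTorch bridge

device = torch.device("hpu")  # Gaudi accelerators are exposed as "hpu" devices

# Placeholder model standing in for a real GenAI workload.
model = torch.nn.Sequential(
    torch.nn.Linear(4096, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024),
).to(device).eval()

x = torch.randn(8, 4096, device=device)  # placeholder input batch

with torch.no_grad():
    y = model(x)
    htcore.mark_step()  # in lazy mode, flush queued ops to the device

print(y.shape)  # torch.Size([8, 1024])
```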


Accelerating On-Device AI

AI PCs are projected to make up 80% of the PC market by 2028, according to Boston Consulting Group. In response, Intel has moved quickly to create the best hardware and software platform for the AI PC, enabling more than 100 independent software vendors (ISVs) and 300 features, and supporting 500 AI models across its Core Ultra platform.

The company has revealed the architectural details of Lunar Lake – the flagship processor for the next generation of AI PCs. With a leap in graphics and AI processing power, and a focus on power-efficient compute performance for the thin-and-light segment, Lunar Lake will deliver up to 40% lower SoC power and more than three times the AI compute of the previous generation. It's expected to ship in the third quarter of 2024.

Lunar Lake’s architecture will enable:

* New Performance-cores (P-cores) and Efficient-cores (E-cores) that deliver significant improvements in performance and energy efficiency.

* A fourth-generation Intel neural processing unit (NPU) with up to 48 tera-operations per second (TOPS) of AI performance. This NPU delivers up to 4x the AI compute of the previous generation, enabling corresponding improvements in generative AI; a brief software sketch of targeting the NPU appears after this list.

* An all-new GPU design, code-named Battlemage, that combines two new innovations: Xe2 GPU cores for graphics and Xe Matrix Extension (XMX) arrays for AI. The Xe2 GPU cores improve gaming and graphics performance by 1.5x over the previous generation, while the new XMX arrays enable a second AI accelerator with up to 67 TOPS of performance for extraordinary throughput in AI content creation.

* An advanced low-power island, a novel compute cluster and Intel innovation that handles background and productivity tasks with extreme efficiency, enabling better laptop battery life.
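
As a concrete example of what a second AI accelerator means for software, below is a minimal sketch of running a model on an Intel NPU through OpenVINO, Intel's inference toolkit. The model file, input shape and data are placeholders, and whether an "NPU" device appears depends on the OpenVINO release and installed drivers; treat this as an illustration rather than a definitive recipe.

```python
# Minimal sketch: compiling and running a model on an Intel NPU with OpenVINO.
# "model.xml" and the input tensor are placeholders; NPU availability depends
# on the OpenVINO release and the drivers installed on the machine.
import numpy as np
import openvino as ov

core = ov.Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU'] on an AI PC

model = core.read_model("model.xml")                    # placeholder OpenVINO IR model
compiled = core.compile_model(model, device_name="NPU")  # target the NPU

input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input
result = compiled([input_tensor])                        # run inference on the NPU

print(next(iter(result.values())).shape)                 # shape of the first output
```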