Intel® Shows OCI Optical I/O Chiplet Co-packaged with CPU at OFC 2024, Targeting Explosive AI Scaling

At the Optical Fiber Communication Conference (OFC) in San Diego on March 26-28, 2024, Intel plans to demonstrate its advanced Optical Compute Interconnect (OCI) chiplet co-packaged with a prototype of a next-generation Intel CPU and running live, error-free traffic, giving the industry a look at the future of high-bandwidth compute interconnect.

Additionally, we plan to demonstrate our latest Silicon Photonics Tx and Rx ICs, designed to support emerging 1.6 Tbps pluggable connectivity applications in hyperscale data centers.

Optical I/O as an Enabler for Bringing AI Everywhere

AI applications are being deployed at an increasing pace and are positioned to be drivers of the global economy and to shape the evolution of society at large. Recent advances in Large Language Models (LLMs) and Generative AI have only accelerated that trend.

Larger and more efficient Machine Learning (ML) models will play a key role in addressing the emerging requirements of AI acceleration workloads. Scaling future compute fabrics to meet those requirements drives exponential growth in I/O bandwidth and demands longer-reach connectivity to support larger xPU clusters, along with architectures that use resources more efficiently, such as GPU disaggregation and memory pooling.

Electrical I/O (i.e., copper trace connectivity) supports high bandwidth density and low power, but only over very short reaches of about one meter or less. Pluggable optical transceiver modules, as used in current data centers and early AI clusters, can extend that reach, but at cost and power levels that are not sustainable with the scaling requirements of the AI workloads immediately ahead of us.

A co-packaged xPU (CPU, GPU, IPU) optical I/O solution can support higher bandwidths with high power efficiency, low latency, and longer reach, which is exactly what AI/ML infrastructure scaling requires.
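To make that trade-off concrete, here is a minimal back-of-envelope sketch of interconnect power. Every figure in it (cluster size, per-xPU bandwidth, and the pJ/bit values assumed for pluggable versus co-packaged optics) is an illustrative assumption rather than a number from this announcement; the point is only that, at AI-fabric bandwidths, the energy spent per bit moved translates directly into kilowatts of interconnect power.

    # Back-of-envelope interconnect power model. All figures are illustrative
    # assumptions for the sake of the arithmetic, not Intel specifications.

    def interconnect_power_watts(aggregate_tbps: float, energy_pj_per_bit: float) -> float:
        """Power [W] = bandwidth [bit/s] * energy per bit [J/bit]."""
        bits_per_second = aggregate_tbps * 1e12
        joules_per_bit = energy_pj_per_bit * 1e-12
        return bits_per_second * joules_per_bit

    # Hypothetical fabric: 1,000 xPUs, each with 4 Tbps of off-package I/O.
    aggregate_tbps = 1_000 * 4

    for label, pj_per_bit in [("pluggable optics, assumed 15 pJ/bit", 15.0),
                              ("co-packaged optical I/O, assumed 5 pJ/bit", 5.0)]:
        kilowatts = interconnect_power_watts(aggregate_tbps, pj_per_bit) / 1e3
        print(f"{label}: ~{kilowatts:,.0f} kW spent just moving bits")

Under these assumed numbers, lowering the energy per bit by moving the optical conversion next to the xPU is what keeps interconnect power, and the associated cost, within a sustainable envelope as clusters grow.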