Creating the Connectivity Required for AI Everywhere

By Tony Chan Carusone, CTO, Alphawave Semi

All major semiconductor companies now use chiplets to develop devices at leading-edge nodes. This approach requires a die-to-die interface within the package to provide very high-speed communication. Such an interface is particularly important for AI applications, which are springing up everywhere, in both large systems and at the edge. AI demands high throughput, low latency, low energy consumption, and the ability to manage large data sets. The interface must serve needs ranging from enormous clusters requiring optical interconnects to portable, wearable, mobile, and remote systems that are extremely power-limited. It must also support platforms such as the widely recognized ChatGPT as well as others on the horizon. The right interface with the right ecosystem is critical for the new world of AI everywhere.

Introduction
The Impact of AI
Custom Silicon is Transforming AI-Powered Data Centers
A Powerful Problem
Affordable AI Compute for the Next Generation
Connectivity Demands for AI
Example of xPU Cluster Network Topology
Connectivity Hardware in the Datacenter
Monolithic Solutions
Chiplets
Optical Module Anatomy
Linear Pluggable Optical (LPO) Module Anatomy
Traditional DSP vs. Linear Pluggable Optics
Linear Pluggable Optics Usage Scenario
Co-Packaged Optics
Direct Drive
Digital Drive
Bandwidth Density Challenge
The Datacenter Memory Problem
Disaggregated Computing
The Distributed Datacenter
Optical Module Anatomy with Chiplets
Takeaways