New Approach to Die-to-Memory Chiplet Interconnects

By Ramin Farjadrad, CEO, Eliyan (February 2024)

Generative AI, with its enormous demand for performance, requires chiplet-based designs that place banks of high-speed memory close to the processor. This architecture is essential to breaking through the so-called memory wall (the limits on memory bandwidth and capacity). However, the typical silicon interposer is not large enough to accommodate all the memory that today’s packages could otherwise hold. A new approach, called the Universal Memory Interface (UMI), provides high-bandwidth die-to-die (D2D) connectivity between compute and memory chiplets. It works with standard packaging (no interposer) and supports both high-speed transfers and in-memory computing.
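The memory wall can be made concrete with a roofline-style estimate: a workload's attainable throughput is the smaller of the compute peak and the memory bandwidth multiplied by the workload's arithmetic intensity (operations per byte moved). The short Python sketch below illustrates that arithmetic; every figure in it is a placeholder chosen for illustration only, not a UMI, HBM, or product specification.

    # Roofline-style estimate of whether a workload hits the memory wall.
    # All numbers are illustrative placeholders, not product specifications.
    def attainable_tflops(peak_tflops: float,
                          mem_bw_tbps: float,
                          ops_per_byte: float) -> float:
        # Attainable throughput = min(compute roof, bandwidth * arithmetic intensity).
        return min(peak_tflops, mem_bw_tbps * ops_per_byte)

    peak = 1000.0      # hypothetical accelerator peak, TFLOPS
    bandwidth = 3.0    # hypothetical aggregate memory bandwidth, TB/s

    # Dense GEMM reuses each byte many times; token-by-token decoding does not.
    for phase, ops_per_byte in [("GEMM-heavy prefill", 600.0),
                                ("token-by-token decode", 50.0)]:
        t = attainable_tflops(peak, bandwidth, ops_per_byte)
        bound = "compute-bound" if t >= peak else "memory-bound"
        print(f"{phase}: {t:.0f} TFLOPS attainable ({bound})")

Raising either the memory bandwidth reaching the compute die or the amount of memory that fits in the package lifts this ceiling, which is the role a D2D memory interface on standard packaging is intended to play.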

Introduction
Chiplets Everywhere All at Once
Today’s Bottlenecks in AI Performance
Chiplet Systems Will Be the Future of Semiconductors
Terminology & Standards Background
UMI™ Doubles Memory Bandwidth Efficiency
Universal Memory Interface (UMI™) Chiplet
Custom HBM with UMI™ Delivers Major Differentiation
PHY Area on ASIC: HBM4 on CoWoS vs. UMI™ on Std Pkg
UMI™ Enables HBM & Compute ASIC Expansion
UMI™ on Adv Pkg Maximizes Precious ASIC Area
Conclusion
Thank You
AI/HPC Limitations: Memory & Compute Barriers
What Metrics Are Important?