What’s Next for Multi-Die Systems in 2024?
By Shekhar Kapoor, Synopsys
It’s hard to imagine the level of systemic scale and complexity required to create a world that is truly smart. Applications such as ChatGPT depend on massive amounts of data to function: the model was trained on a dataset of 300 billion words and, as of June 2023, was handling 60 million visits and more than 10 million queries every day. And that is just the beginning. The more sophisticated technologies such as AI and high-performance computing (HPC) become, the greater the bandwidth and compute power they demand.
Multi-die system architectures offer a way for innovation to keep accelerating as Moore’s law slows, across areas from generative AI to autonomous vehicles and hyperscale data centers. While movement in this direction is already underway and will continue in 2024, uptake is nuanced: designs currently occupy a middle ground spanning 2D up to 3D (even extending to 3.5D in some cases), chosen according to performance, power, and area (PPA) requirements, or, more precisely, performance, power, form factor, and cost.
The smart future relies on multi-die system design, but it will need assistance to become a widespread reality in the coming year and beyond. Here are four of the top multi-die system design predictions for 2024.
Related Chiplets
- Interconnect Chiplet
- 12nm EURYTION RFK1 - UCIe SP based Ka-Ku Band Chiplet Transceiver
- Bridglets
- Automotive AI Accelerator
- Direct Chiplet Interface
Related Technical Papers
- The chiplet universe is coming: What’s in it for you?
- Multi-Die Systems Reshape Semiconductor Innovation
- Optimizing Inter-chip Coupler Link Placement for Modular and Chiplet Quantum Systems
- The Next Frontier in Semiconductor Innovation: Chiplets and the Rise of 3D-ICs