Getting Moore with Less: How Chiplets and Open Interconnect Accelerate Cloud-Optimized AI Silicon
Presented by Mark Kuemerle (Marvell) | Ramin Farjadrad (Eliyan)
As today’s AI growth stresses hyperscale compute infrastructure, the conventional semiconductor scaling techniques the industry has come to rely on are approaching their natural limits. This has accelerated the need for silicon design innovation that delivers the leaps in performance, power and space efficiency required to keep pace with the AI revolution. With AI/ML accelerator and high-performance computing (HPC) chips running up against reticle limits even in the most advanced process nodes, chiplets are poised to take Moore’s Law in a more modular, vertical direction to advance high-performance AI and computing. This panel of experts, moderated by a top industry analyst, will explore the fundamental technology building blocks of chiplet-driven designs and the options for die-to-die interconnect, from proprietary links such as NVLink to open standards such as Bunch of Wires (BoW) and UCIe.
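The reticle-limit argument above is easy to see with a back-of-the-envelope yield model. Below is a minimal, illustrative sketch (not from the panel) using a simple Poisson defect model; the defect density and die areas are assumed values chosen only to show the shape of the trade-off.

```python
# Illustrative sketch: why splitting a near-reticle-limit die into chiplets
# can improve the share of wafer area that ends up in shippable packages.
# D0 and the die areas below are assumptions for illustration only.
import math

def poisson_yield(area_cm2: float, d0_per_cm2: float) -> float:
    """Fraction of good dies under a Poisson defect model: Y = exp(-A * D0)."""
    return math.exp(-area_cm2 * d0_per_cm2)

D0 = 0.1                              # assumed defects per cm^2
mono_area, chiplet_area = 8.0, 2.0    # ~reticle-limit die vs one of four chiplets

mono_yield = poisson_yield(mono_area, D0)
chiplet_yield = poisson_yield(chiplet_area, D0)

# A single defect scraps the whole 8 cm^2 monolithic die. With chiplets and
# known-good-die testing, only the defective 2 cm^2 piece is discarded before
# packaging, so far more wafer area survives into finished parts.
print(f"Monolithic {mono_area} cm^2 die yield: {mono_yield:.1%}")
print(f"Per-chiplet ({chiplet_area} cm^2) yield: {chiplet_yield:.1%}")
print(f"Wafer area reaching a good package: {mono_yield:.1%} (monolithic) "
      f"vs ~{chiplet_yield:.1%} (known-good chiplets)")
```

Under these assumed numbers the monolithic yield comes out near 45% while each chiplet yields roughly 82%, which is the kind of gap that motivates disaggregation once dies approach the reticle limit; it also shows why the die-to-die interconnect (BoW, UCIe) becomes the critical enabling piece.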
Related Videos
- How Chiplets Accelerate Generative AI Applications
- Impact of Chiplets, Heterogeneous Integration and Modularity on AI and HPC systems
- Chiplets in 2029 and How We Got There
- Live with Cadence talking AI, Chiplets, Virtual Prototyping and more at Embedded World 2024