Getting Moore with Less: How Chiplets and Open Interconnect Accelerate Cloud-Optimized AI Silicon
Presented by Mark Kuemerle (Marvell) and Ramin Farjadrad (Eliyan)
As today’s AI growth strains hyperscale compute infrastructure, the conventional semiconductor scaling techniques the industry has come to rely on are reaching their natural limits. This has accelerated the need for silicon design innovation that delivers leaps in performance, power, and space efficiency to keep pace with the speed of the AI revolution. With AI/ML accelerator and high-performance computing (HPC) chips running up against reticle limits even in the most advanced process nodes, chiplets are poised to take Moore’s Law in a more modular, vertical direction to advance high-performance AI and computing. This panel of experts, moderated by a top industry analyst, will explore the fundamental technology building blocks of chiplet-driven designs and the open-standards alternatives for die-to-die interconnect, including NVLink, BoW, and UCIe.
Related Videos
- How Chiplets Accelerate Generative AI Applications
- Impact of Chiplets, Heterogeneous Integration and Modularity on AI and HPC systems
- Chiplets in 2029 and How We Got There
- Live with Cadence talking AI, Chiplets, Virtual Prototyping and more at Embedded World 2024