Chiplet Architecture for AI Will Create New Demands for Assembly
By Nolan Johnson, SMT007 Magazine (May 28, 2024)
As we examine the entire AI ecosystem more closely, it becomes clear that AI algorithms are intensely hungry for compute power. That demand is accelerating beyond the rate Moore’s Law would predict, just as traditional semiconductor fabrication methods are struggling to keep Moore’s Law on track. It’s a real dilemma.
Those watching AI say that advanced packaging techniques, which have been in R&D for some time, have found their killer app: AI is what will propel these cutting-edge packages into the mainstream.
At a 2022 symposium on advanced packaging in Washington, D.C., I met Dale McHerron, a researcher on AI compute hardware. As we discussed IBM’s work in this area, Dale introduced me to Arvind Kumar, a principal research scientist and manager in AI hardware and chiplet architectures.
I reached out to Arvind to discuss his keynote at the recent IMAPS conference on the AI hardware ecosystem and the role of advanced packaging. Those in the assembly services industry know that any new package will require accurate and reliable placement on the EMS manufacturing floor. Arvind shares his perspective and some predictions based on his research. It is also clear that much coordination and communication is still needed to make this work.
Nolan Johnson: What is chiplet architecture and why does it matter? How is advanced packaging moving forward?
Arvind Kumar: Chiplet architectures, which allow the partitioning of complex designs into tightly co-packaged sub-elements, are influencing the way we think about packaging. We would like to put more chips into a single package and have them talk to each other with high bandwidth, low latency, and low-energy interconnects. That goal is driving emerging packaging technologies to higher interconnect densities, more routing layers, and larger body sizes.
Johnson: Ever since semiconductors were developed, the prevailing thinking has been to make ever-bigger monolithic chips. Why the change?
Kumar: For a long time, the fundamental idea was that we could get more performance out of larger dies at the most advanced technology node. Fabricating all parts of a chip at the most advanced node is getting very expensive and has major yield challenges, so this drives us toward smaller die sizes. Additionally, we can partition the chip such that some parts that don’t scale well can remain in an older technology node. That’s a very natural fit for chiplet architecture.
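Kumar's yield point can be made concrete with a simple defect-density calculation. The sketch below uses a basic Poisson yield model with hypothetical numbers (an 800 mm² monolithic die versus 200 mm² chiplets at an assumed 0.002 defects/mm²) to show how much better smaller dies yield; the figures are illustrative, not from the interview.

```python
# Illustrative sketch (not from the article): a simple Poisson yield model
# showing why splitting one large die into smaller chiplets can help yield.
# The defect density and die areas below are hypothetical round numbers.
import math

def die_yield(area_mm2: float, defects_per_mm2: float) -> float:
    """Poisson yield model: probability a die of the given area has zero defects."""
    return math.exp(-defects_per_mm2 * area_mm2)

D0 = 0.002          # assumed defect density, defects per mm^2
big_die = 800.0     # one monolithic 800 mm^2 die
chiplet = 200.0     # a 200 mm^2 chiplet; four cover the same silicon area

print(f"Monolithic 800 mm^2 die yield: {die_yield(big_die, D0):.1%}")   # ~20%
print(f"Single 200 mm^2 chiplet yield: {die_yield(chiplet, D0):.1%}")   # ~67%
# Known-good-die testing lets you discard only the bad chiplets, so the
# effective silicon cost tracks the much higher per-chiplet yield rather
# than the low monolithic yield.
```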