Chiplet Architecture for AI Will Create New Demands for Assembly
By Nolan Johnson, SMT007 Magazine (May 28, 2024)
As we examine the AI ecosystem more closely, it becomes clear that AI algorithms are intensely hungry for compute power. That demand is growing faster than the pace Moore’s Law would predict, at the very moment traditional semiconductor fabrication is struggling to keep up with Moore’s Law at all. It’s a real dilemma.
Those watching AI say that advanced packaging techniques, which have been in development for some time, have found their killer app: AI is the demand that will propel these cutting-edge packages into the mainstream.
At a 2022 symposium on advanced packaging in Washington, D.C., I met Dale McHerron, a researcher on AI compute hardware. As we discussed IBM’s work in this area, Dale introduced me to Arvind Kumar, a principal research scientist and manager in AI hardware and chiplet architectures.
I reached out to Arvind to talk about his keynote presentation at the recent IMAPS conference, where he addressed the AI hardware ecosystem and the role of advanced packaging. Those in the assembly services industry know that any new package will require accurate, reliable placement on the EMS manufacturing floor. Arvind shares his perspective and some predictions based on his research; it is also clear that much coordination and communication is still needed to make this work.
Nolan Johnson: What is chiplet architecture and why does it matter? How is advanced packaging moving forward?
Arvind Kumar: Chiplet architectures, which allow the partitioning of complex designs into tightly co-packaged sub-elements, are influencing the way we think about packaging. We would like to put more chips into a single package and have them talk to each other with high bandwidth, low latency, and low-energy interconnects. That goal is driving emerging packaging technologies to higher interconnect densities, more routing layers, and larger body sizes.
Johnson: Ever since semiconductors were developed, the guiding pattern has been to build ever-bigger monolithic chips. Why the change?
Kumar: For a long time, the fundamental idea was that we could get more performance out of larger dies at the most advanced technology node. Fabricating all parts of a chip at the most advanced node is getting very expensive and has major yield challenges, so this drives us toward smaller die sizes. Additionally, we can partition the chip such that some parts that don’t scale well can remain in an older technology node. That's a very natural fit for chiplet architecture.
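Kumar’s cost-and-yield point can be illustrated with a simple worked example. The sketch below uses the basic Poisson yield model Y = exp(-A * D0); the defect density and die areas are made-up illustrative values, not data from the interview, and real fabs use more elaborate models.

```python
import math

# Simple Poisson yield model: Y = exp(-A * D0), where A is die area (cm^2)
# and D0 is defect density (defects/cm^2). The numbers below are illustrative
# placeholders, not real process data.

def die_yield(area_cm2: float, defect_density: float) -> float:
    """Probability that a die of the given area has no killer defects."""
    return math.exp(-area_cm2 * defect_density)

D0 = 0.2          # hypothetical defects per cm^2 at an advanced node
total_area = 6.0  # hypothetical total silicon area needed, in cm^2

monolithic_yield = die_yield(total_area, D0)     # one big die
chiplet_yield = die_yield(total_area / 4, D0)    # one of four equal chiplets

print(f"Monolithic die yield: {monolithic_yield:.1%}")   # ~30%
print(f"Single chiplet yield: {chiplet_yield:.1%}")      # ~74%

# With a known-good-die test step, only good chiplets are packaged, so the
# usable silicon per wafer tracks the much higher per-chiplet yield rather
# than the low monolithic yield.
```

Under these assumed numbers, splitting one large die into four chiplets roughly doubles the fraction of good silicon, which is the economic pull toward smaller die sizes that Kumar describes.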