Chiplets for generative AI
By Jawad Nasrullah, CEO - Palo Alto Electron
Generative AI models, known for their large size and substantial computational demands, are pushing the boundaries of traditional computing infrastructure. As the industry seeks to reduce the cost, execution time, and environmental impact of these models, the scale-out approach traditionally seen at the data center level is being brought inside the IC (integrated circuit) package using chiplet technology, with the aim of easing power-consumption and thermal-design constraints. The talk explores strategies to enhance chip efficiency and reduce overheads, including:
- developing AI-specific core chiplets
- implementing efficient communication fabrics
- expanding on-chip memory
- integrating more components within the IC package
- improving die-to-die interfaces
- adopting vertical chip-stacking technologies
These techniques are vital for reducing power and mitigating hotspots.
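To see why improved die-to-die interfaces matter for power, a back-of-envelope sketch helps. The energy-per-bit figures below are illustrative assumptions, not measurements: roughly 5 pJ/bit for a conventional off-package SerDes link versus roughly 0.5 pJ/bit for a dense in-package die-to-die link, applied to an assumed 1 TB/s of model weight/activation traffic.

```python
# Back-of-envelope interconnect power comparison for chiplet-based AI
# accelerators. All figures below are illustrative assumptions, not
# vendor specifications.

def io_power_watts(bandwidth_gbps: float, energy_pj_per_bit: float) -> float:
    """Interconnect power = bandwidth (bits/s) x energy per bit (J/bit)."""
    return bandwidth_gbps * 1e9 * energy_pj_per_bit * 1e-12

BANDWIDTH_GBPS = 8000.0  # assumed 1 TB/s of weight/activation traffic

# Assumed ballpark energy costs per bit moved:
off_package = io_power_watts(BANDWIDTH_GBPS, 5.0)   # off-package SerDes
die_to_die = io_power_watts(BANDWIDTH_GBPS, 0.5)    # in-package die-to-die

print(f"off-package SerDes:   {off_package:.1f} W")
print(f"in-package die-to-die: {die_to_die:.1f} W")
```

Under these assumed numbers, keeping the same traffic inside the package cuts interconnect power by roughly an order of magnitude (about 40 W down to about 4 W), which also shrinks the hotspot problem the talk highlights.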