Chiplets for generative AI
By Jawad Nasrullah, CEO - Palo Alto Electron
Generative AI models, known for their large size and substantial computational demands, are pushing the boundaries of traditional computing infrastructure. As the industry seeks ways to reduce the cost, execution time, and environmental impact of these models, the scale-out computing approach traditionally found at the data-center level is being brought inside the IC (Integrated Circuit) package using chiplet technology, with the aim of addressing power-consumption and thermal-design challenges. The talk explores strategies to improve chip efficiency and reduce overhead. Key approaches include AI-specific core chiplets, efficient communication fabrics, expanded on-chip memory, the integration of more components within the IC package, improved die-to-die interfaces, and vertical chip-stacking technologies. These techniques are vital for reducing power and mitigating hotspots.
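The power argument behind in-package integration can be illustrated with a rough data-movement energy estimate. The sketch below compares the energy cost of moving the same traffic over an off-package memory link versus an in-package die-to-die link; the energy-per-bit figures are hypothetical ballpark values chosen for illustration, not numbers from the talk.

```python
# Rough data-movement energy model illustrating why shifting traffic from
# off-package links onto short-reach in-package die-to-die (D2D) links
# reduces power. All pJ/bit figures are illustrative assumptions.

PJ_PER_BIT = {
    "off_package_dram": 15.0,  # assumed: conventional off-package DRAM access
    "in_package_hbm": 4.0,     # assumed: HBM-style in-package memory
    "d2d_link": 0.5,           # assumed: short-reach die-to-die interface
}

def transfer_energy_joules(gigabytes: float, link: str) -> float:
    """Energy (in joules) to move `gigabytes` of data over the given link."""
    bits = gigabytes * 8e9               # GB -> bits
    return bits * PJ_PER_BIT[link] * 1e-12  # pJ -> J

# Example: moving 100 GB of weights/activations per inference batch.
for link in PJ_PER_BIT:
    joules = transfer_energy_joules(100, link)
    print(f"{link:18s}: {joules:.2f} J per 100 GB moved")
```

Under these assumed figures, the same 100 GB transfer costs about 12 J over an off-package DRAM link but only about 0.4 J over a die-to-die link, which is why keeping more memory and more components inside the package pays off at scale.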
Related Videos
- How Chiplets Accelerate Generative AI Applications
- Chiplets for the future of AI
- Connectivity for AI Everywhere: The Role of Chiplets
- Photonic Fabric Interface Chiplets for AI XPU Optical Connectivity