Compression Enabled MRAM Memory Chiplet Subsystems for LLM Inference Accelerators
By Andy Green (Numem) and Nilesh Shah (ZeroPoint Technologies)
This session unveils an AI-specific chiplet alternative to GPU-based inference systems, one that can deliver HBM-like bandwidth at 30-50% lower power.