Compression Enabled MRAM Memory Chiplet Subsystems for LLM Inference Accelerators
By Andy Green (Numem) and Nilesh Shah (ZeroPoint Technologies)
This session unveils an AI-specific chiplet alternative to GPU-based inference that can deliver HBM-like bandwidth at 30-50% lower power.
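The bandwidth claim above can be framed with a back-of-envelope model: if an inline lossless codec compresses the weight and KV-cache traffic by some ratio, the compute die sees the link's raw bandwidth multiplied by that ratio. The sketch below is purely illustrative; the function names, the 400 GB/s link figure, and the 2x ratio are assumptions for the example, not figures from the session.

```python
# Illustrative model of compression-enabled effective bandwidth.
# All numbers here are hypothetical examples, not vendor data.

def effective_bandwidth_gbps(raw_gbps: float, compression_ratio: float) -> float:
    """Effective bandwidth seen by the accelerator when traffic is
    compressed by `compression_ratio` before crossing the link."""
    return raw_gbps * compression_ratio

def power_saving_fraction(baseline_w: float, subsystem_w: float) -> float:
    """Fractional power saving of a memory subsystem vs. a baseline."""
    return 1.0 - subsystem_w / baseline_w

# Example: a hypothetical 400 GB/s MRAM chiplet link with a 2x lossless
# codec matches an 800 GB/s uncompressed link.
print(effective_bandwidth_gbps(400.0, 2.0))   # 800.0

# Example: a 60 W subsystem against a 100 W HBM baseline.
print(power_saving_fraction(100.0, 60.0))
```

The point of the model is that compression trades codec logic (cheap in power) for raw pin bandwidth (expensive in power), which is how a lower-power link can present HBM-like effective throughput.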