Compression-Enabled MRAM Memory Chiplet Subsystems for LLM Inference Accelerators
By Andy Green (Numen) and Nilesh Shah (Zeropoint Technologies)
This session unveils an AI-specific chiplet alternative to GPU-based inference that can deliver HBM-like bandwidth at 30-50% lower power.