Breaking the Memory Wall: How d-Matrix Is Redefining AI Inference with Chiplets
By Maurizio Di Paolo Emilio, embedded.com | April 23, 2025
As AI workloads push the limits of performance, power efficiency, and memory bandwidth, chiplets are rapidly emerging as the architecture of choice. In this interview, Sree Ganesan, Vice President of Product at d-Matrix, explains how the company's chiplet-based platform rethinks AI inference: attacking the memory wall with Digital In-Memory Computing (DIMC), linking chiplets through custom die-to-die interconnects, and, by the company's account, delivering 10x faster token generation, 3x better energy efficiency, and a scalable roadmap for generative AI.
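The "memory wall" claim is easy to make concrete with a back-of-the-envelope roofline estimate: in batch-1 generative inference, every model weight must be streamed from memory for each generated token, so throughput is capped by bandwidth rather than compute. The Python sketch below illustrates that arithmetic with hypothetical model and hardware numbers (not d-Matrix's figures); compute-in-memory approaches like DIMC target exactly this ceiling by reducing data movement.

```python
# Back-of-the-envelope roofline: why batch-1 LLM token generation is
# memory-bound. All numbers are illustrative assumptions, not d-Matrix figures.

params = 70e9                    # hypothetical 70B-parameter model
weight_bytes = params * 2        # FP16 weights: 2 bytes each
flops_per_token = 2 * params     # one multiply-add per weight per token

hbm_bandwidth = 3.35e12          # bytes/s, an HBM-class accelerator (assumed)
peak_flops = 1.0e15              # FLOP/s, dense FP16 peak (assumed)

# Decode reads every weight once per generated token, so:
intensity = flops_per_token / weight_bytes   # 1 FLOP per byte moved
balance = peak_flops / hbm_bandwidth         # ~300 FLOP/byte to be compute-bound

print(f"arithmetic intensity: {intensity:.1f} FLOP/byte")
print(f"machine balance:      {balance:.0f} FLOP/byte")

# With intensity far below the machine balance, token rate is set by how
# fast weights stream from memory -- the "memory wall":
print(f"bandwidth-bound ceiling: ~{hbm_bandwidth / weight_bytes:.0f} tokens/s")
```

Under these assumptions the accelerator sits near 1 FLOP/byte against a machine balance of roughly 300 FLOP/byte, so most of its compute idles while weights move; that gap is what placing compute inside the memory arrays is meant to close.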
Read the full interview at embedded.com.