Breaking the Memory Wall: How d-Matrix Is Redefining AI Inference with Chiplets
By Maurizio Di Paolo Emilio, embedded.com | April 23, 2025
As AI workloads push the limits of performance, power efficiency, and memory bandwidth, chiplets are rapidly emerging as the architectural solution of choice. In this interview, Sree Ganesan, Vice President of Product at d-Matrix, explains how the company's chiplet-based platform is redefining AI inference. From attacking the memory wall with Digital In-Memory Computing (DIMC) to enabling seamless multi-chiplet communication via custom interconnects, d-Matrix describes how its innovations deliver 10x faster token generation, 3x better energy efficiency, and a scalable roadmap for generative AI.
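The "memory wall" the interview refers to is easiest to see with a quick calculation: in batch-1 generative decoding, every output token requires streaming the model's weights past the compute units, so token rate is capped by memory bandwidth rather than raw FLOPS. Below is a minimal back-of-the-envelope Python sketch with hypothetical numbers (a 70B-parameter model at FP8 and HBM-class bandwidth; neither figure comes from the interview) illustrating why approaches that raise effective bandwidth, such as in-memory compute, target the real bottleneck.

```python
# Back-of-the-envelope "memory wall" estimate for batch-1 LLM decode.
# Assumed figures (hypothetical, for illustration -- not d-Matrix numbers):
# each generated token must stream all model weights from memory, so the
# token rate is bounded by memory bandwidth, not by compute throughput.

WEIGHT_BYTES = 70e9 * 1      # 70B parameters at 1 byte/param (e.g., FP8)
MEM_BW_GBS = 3_350           # HBM-class memory bandwidth in GB/s (assumed)

bytes_per_token = WEIGHT_BYTES            # weights re-read once per token
max_tokens_per_s = (MEM_BW_GBS * 1e9) / bytes_per_token

print(f"Bandwidth-bound ceiling: ~{max_tokens_per_s:.0f} tokens/s per stream")
# -> roughly 48 tokens/s: the ceiling moves only if effective bandwidth
#    rises, which is the lever in-memory compute architectures pull.
```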
To read the full article, click here
Related Chiplet
- 12nm EURYTION RFK1 - UCIe-SP-based Ka-Ku Band Chiplet Transceiver
- Interconnect Chiplet
- Bridglets
- Automotive AI Accelerator
- Direct Chiplet Interface
Related News
- EdgeCortix Awarded New 3 Billion Yen NEDO Project to Develop Advanced Energy-Efficient AI Chiplet for Edge Inference and Learning
- DreamBig closes $75M Series B Funding Round, Co-led by Samsung Catalyst Fund and Sutardja Family to Enable AI Inference and Training Solutions to the Masses
- Chiplet Summit to Focus on New Packages and AI Applications in 2025
- Femtosense Combines AI Chiplet with MCU for Audio SiP
Latest News
- Arm Reportedly Weighs Chiplet and Solution Development, Raising Customer Tensions
- SiMa.ai Expands Strategic Collaboration with Synopsys to Accelerate Automotive AI Innovation
- When Standards Enable Chiplets
- Multibeam Secures $31 Million in Series B Financing to Accelerate Global Deployment of E-Beam Lithography Production Solutions
- Alphawave Semi Highlights Why the Next Generation of AI Advances Demand Chiplet Architectures at EE Times: The Future of Chiplets