Breaking the Memory Wall: How d-Matrix Is Redefining AI Inference with Chiplets
By Maurizio Di Paolo Emilio, embedded.com | April 23, 2025
As AI workloads push the limits of performance, power efficiency, and memory bandwidth, chiplets are rapidly emerging as the architectural solution of choice. In this interview, Sree Ganesan, Vice President of Product at d-Matrix, explains how the company's chiplet-based platform is redefining AI inference, from tackling the memory wall with Digital In-Memory Computing (DIMC) to enabling multi-chiplet communication over custom interconnects. d-Matrix says these innovations unlock 10x faster token generation, 3x better energy efficiency, and a scalable roadmap for generative AI. A back-of-envelope sketch of the memory-wall math follows below.
Read the full article at embedded.com.
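The "memory wall" claim is easy to quantify: during autoregressive decoding, each generated token must stream the model's weights from memory, so the token rate is capped by memory bandwidth rather than by compute. Here is a minimal back-of-envelope sketch in Python using illustrative numbers, a hypothetical 70B-parameter INT8 model at batch size 1 and roughly 3.35 TB/s of HBM bandwidth; these are assumptions for the sake of the arithmetic, not d-Matrix figures:

```python
# Roofline-style estimate of why LLM token generation is memory-bound.
# Assumptions (illustrative only): 70B-parameter model, 8-bit weights,
# batch size 1, so every generated token streams all weights once.

PARAMS = 70e9            # model parameters (assumed)
BYTES_PER_PARAM = 1      # INT8 weights (assumed)
MEM_BW = 3.35e12         # bytes/s, roughly one HBM3 stack group (assumed)

bytes_per_token = PARAMS * BYTES_PER_PARAM
tokens_per_sec = MEM_BW / bytes_per_token

print(f"Weight traffic per token: {bytes_per_token / 1e9:.0f} GB")
print(f"Bandwidth-bound ceiling:  {tokens_per_sec:.0f} tokens/s")
```

At these assumed numbers the ceiling is roughly 48 tokens/s per device no matter how many FLOPS the accelerator offers, which is why approaches like d-Matrix's DIMC move the computation into the memory arrays instead of adding more arithmetic units.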
Related Chiplets
- DPIQ Tx PICs
- IMDD Tx PICs
- Near-Packaged Optics (NPO) Chiplet Solution
- High Performance Droplet
- Interconnect Chiplet
Related News
- EdgeCortix Awarded New 3 Billion Yen NEDO Project to Develop Advanced Energy-Efficient AI Chiplet for Edge Inference and Learning
- BOS and Tenstorrent Unveil Eagle-N, Industry’s First Automotive AI Accelerator Chiplet SoC
- DreamBig Semiconductor Announces Partnership with Samsung Foundry to Launch Chiplets for World Leading MARS Chiplet Platform on 4nm FinFET Process Technology Featuring 3D HBM Integration to Solve Scale-up and Scale-out Limitations of AI for the Masses
- D-Matrix Targets Fast LLM Inference for ‘Real World Scenarios’
Latest News
- CoAsia SEMI Commences Supply of 3D IC SoCs: Korea’s First Case, Positioning 3D IC as the Next HBM
- Eliyan Secures $50 Million in Strategic Investments from Leading Hyperscalers and AI Infrastructure Providers to Accelerate Scalable AI Systems
- Veeco and imec develop 300mm compatible process to enable integration of barium titanate on silicon photonics
- Lightmatter Introduces Guide Light Engine for AI, Featuring VLSP Technology
- Lightmatter and GUC Partner to Produce Co-Packaged Optics (CPO) Solutions for AI Hyperscalers