D-Matrix Targets Fast LLM Inference for ‘Real World Scenarios’
By Sally Ward-Foxton, EETimes (January 13, 2025)
Startup D-Matrix has built a chiplet-based data center AI accelerator optimized for fast, small-batch LLM inference in the enterprise, in what the company calls “real-world scenarios.” Its novel architecture is based on modified SRAM cells in an all-digital compute-in-memory scheme that the company says is both fast and power-efficient.
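To make “all-digital compute-in-memory” concrete, here is a toy sketch of the general technique: weights sit in an SRAM-like array, and a matrix-vector product is computed bit-serially with digital adder logic beside the array, rather than with analog charge summation. This is a minimal illustration of the concept only, not D-Matrix’s actual macro design, which the company has not published at this level of detail.

```python
# Toy bit-serial digital compute-in-memory (CIM) dot product.
# Weights model the contents of an SRAM array; activations are fed
# one bit-plane at a time and partial products are shift-accumulated
# by digital logic next to the array. Illustrative only.
import numpy as np

def digital_cim_dot(weights: np.ndarray, activations: np.ndarray,
                    act_bits: int = 8) -> np.ndarray:
    """Compute weights @ activations one activation bit-plane at a time,
    as a bit-serial digital CIM tile would (integer weight matrix,
    unsigned integer activation vector)."""
    acc = np.zeros(weights.shape[0], dtype=np.int64)
    for b in range(act_bits):
        bit_plane = (activations >> b) & 1    # one bit of every activation
        partial = weights @ bit_plane         # adder tree beside the array
        acc += partial.astype(np.int64) << b  # shift-and-accumulate
    return acc

# Sanity check against an ordinary matrix-vector multiply.
rng = np.random.default_rng(0)
W = rng.integers(-128, 128, size=(4, 16), dtype=np.int64)
x = rng.integers(0, 256, size=16, dtype=np.int64)
assert np.array_equal(digital_cim_dot(W, x), W @ x)
```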
D-Matrix’s focus is on low-latency batch inference in enterprise data centers. For Llama3-8B, a D-Matrix server (16 four-chiplet chips on eight 600-W cards) can produce 60,000 tokens/second at 1 ms/token latency. For Llama3-70B, a rack of D-Matrix servers (128 four-chiplet chips in a 6-7 kW rack) can produce 30,000 tokens/second at 2 ms/token latency. D-Matrix customers can expect to achieve these figures for batch sizes on the order of 48-64, depending on context length, Sree Ganesan, head of product at D-Matrix, told EE Times.
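As a back-of-envelope consistency check on those figures (our arithmetic, not vendor math): at 1 ms/token, each stream generates about 1,000 tokens/second, so an aggregate 60,000 tokens/second implies roughly 60 concurrent streams, which lines up with the stated batch sizes of 48-64. The same holds for the 70B rack numbers.

```python
# Relate quoted aggregate throughput, per-token latency, and batch size,
# assuming each stream in the batch emits one token per latency interval.
def implied_batch(tokens_per_sec: float, ms_per_token: float) -> float:
    per_stream_tps = 1000.0 / ms_per_token  # tokens/s for a single stream
    return tokens_per_sec / per_stream_tps  # concurrent streams required

print(implied_batch(60_000, 1.0))  # Llama3-8B server -> 60.0 streams
print(implied_batch(30_000, 2.0))  # Llama3-70B rack  -> 60.0 streams
```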