D-Matrix Targets Fast LLM Inference for ‘Real World Scenarios’
By Sally Ward-Foxton, EETimes (January 13, 2025)
Startup D-Matrix has built a chiplet-based data center AI accelerator optimized for fast, small-batch LLM inference in the enterprise, in what it calls "real-world scenarios." The novel architecture is based on modified SRAM cells in an all-digital compute-in-memory scheme that D-Matrix says is both fast and power-efficient.
D-Matrix's focus is on low-latency batch inference in enterprise data centers. For Llama3-8B, a D-Matrix server (16 four-chiplet chips on eight 600-W cards) can produce 60,000 tokens/second at 1 ms/token latency. For Llama3-70B, a rack of D-Matrix servers (128 four-chiplet chips in a 6-7 kW rack) can produce 30,000 tokens/second at 2 ms/token latency. D-Matrix customers can expect to achieve these figures for batch sizes on the order of 48-64, depending on context length, Sree Ganesan, head of product at D-Matrix, told EE Times.
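The quoted figures can be cross-checked with simple arithmetic: if "1 ms/token" is read as the per-stream inter-token latency and tokens/second as aggregate throughput, each stream emits 1,000 tokens/second, so 60,000 tokens/second implies roughly 60 concurrent streams. A minimal sketch of that sanity check (the interpretation of the latency figure is an assumption, not stated in the article):

```python
def implied_concurrent_streams(aggregate_tokens_per_s: float,
                               latency_s_per_token: float) -> float:
    """Each stream emits 1/latency tokens per second;
    aggregate throughput / per-stream rate = concurrent streams."""
    per_stream_rate = 1.0 / latency_s_per_token
    return aggregate_tokens_per_s / per_stream_rate

# Llama3-8B server: 60,000 tokens/s aggregate at 1 ms/token per stream
print(implied_concurrent_streams(60_000, 0.001))  # 60.0

# Llama3-70B rack: 30,000 tokens/s aggregate at 2 ms/token per stream
print(implied_concurrent_streams(30_000, 0.002))  # 60.0
```

Both cases land near the 48-64 batch sizes Ganesan quotes, which is consistent with the figures describing aggregate throughput across a full batch of concurrent requests.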
Read the full article at EE Times.